Let's first compute the regular correlation function. We'll need some radial bins. We'll also need to tell Corrfunc that we're working with a periodic box and how many parallel threads to use. Then we can go ahead and compute the real-space correlation function xi(r) from the pair counts DD(r) (documentation here: https://corrfunc.readthedocs.io/en/master/api/Corrfunc.theory.html)
rmin = 40.0
rmax = 150.0
nbins = 22
r_edges = np.linspace(rmin, rmax, nbins+1)
r_avg = 0.5*(r_edges[1:]+r_edges[:-1])

periodic = True
nthreads = 1

dd_res = DD(1, nthreads, r_edges, x, y, z, boxsize=boxsize, periodic=periodic)
dr_res = DD(0, nthreads, r_edges, x, y, z, X2=x_rand, Y2=y_rand, Z2=z_rand, boxsize=boxsize, periodic=periodic)
rr_res = DD(1, nthreads, r_edges, x_rand, y_rand, z_rand, boxsize=boxsize, periodic=periodic)
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
We can use these pair counts to compute the Landy-Szalay 2pcf estimator (Landy & Szalay 1993). Let's define a function, as we'll want to reuse this:
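For reference, with the pair counts normalized by the number of data points $n_d$ and random points $n_r$, the estimator implemented below is
$$\xi_{\rm LS}(r) = \frac{\widehat{DD}(r) - 2\,\widehat{DR}(r) + \widehat{RR}(r)}{\widehat{RR}(r)}, \qquad \widehat{DD} = \frac{DD}{n_d^2},\;\; \widehat{DR} = \frac{DR}{n_d n_r},\;\; \widehat{RR} = \frac{RR}{n_r^2}.$$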
def landy_szalay(nd, nr, dd, dr, rr):
    # Normalize the pair counts
    dd = dd/(nd*nd)
    dr = dr/(nd*nr)
    rr = rr/(nr*nr)
    xi_ls = (dd-2*dr+rr)/rr
    return xi_ls
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Let's unpack the pair counts from the Corrfunc results object and plot the resulting correlation function. (Note that if you use weights, you need to multiply the pair counts by the 'weightavg' column; a sketch of this follows the next cell.)
dd = np.array([x['npairs'] for x in dd_res], dtype=float)
dr = np.array([x['npairs'] for x in dr_res], dtype=float)
rr = np.array([x['npairs'] for x in rr_res], dtype=float)

xi_ls = landy_szalay(nd, nr, dd, dr, rr)

plt.figure(figsize=(8,5))
plt.plot(r_avg, xi_ls, marker='o', ls='None', color='grey', label='Standard estimator')
plt.xlabel(r'r ($h^{-1}$Mpc)')
plt.ylabel(r'$\xi$(r)')
plt.legend()
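If the pair counts had been computed with weights, the unpacking step would also fold in the mean pair weight. A minimal sketch, assuming weighted catalogs were passed to DD above and using the 'weightavg' field mentioned in the note:

```python
# Sketch only: weighted pair counts. Assumes weights were passed to DD(),
# so each result row carries an 'npairs' count and a 'weightavg' mean pair weight.
dd_w = np.array([row['npairs'] * row['weightavg'] for row in dd_res], dtype=float)
dr_w = np.array([row['npairs'] * row['weightavg'] for row in dr_res], dtype=float)
rr_w = np.array([row['npairs'] * row['weightavg'] for row in rr_res], dtype=float)
xi_ls_w = landy_szalay(nd, nr, dd_w, dr_w, rr_w)
```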
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Great, we can even see the baryon acoustic feature at ~100 $h^{-1}$Mpc! Continuous-function estimator: Tophat basis Now we'll use the continuous-function estimator to compute the same correlation function, but in a continuous representation. First we'll use a tophat basis, to achieve an equivalent (but more correct!) result. We need to give the name of the basis as 'proj_type'. We also need to choose the number of components, 'nprojbins'. In this case, we want one tophat component per bin, so this will just be 'nbins'.
proj_type = 'tophat'
nprojbins = nbins
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Currently the continuous-function estimator is only implemented in DD(s,mu) ('DDsmu'), the redshift-space pair counter that bins pairs in the separation s and in mu, the cosine of the angle to the line of sight. But we can simply set the number of mu bins to 1, and mumax to 1 (the maximum value of mu), to recover the equivalent of DD in real space.
nmubins = 1
mumax = 1.0
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Then we just need to give Corrfunc all this info, and unpack the continuous results! The first returned object is still the regular Corrfunc results object (we could have just used this in our above demo of the standard result).
dd_res, dd_proj, _ = DDsmu(1, nthreads, r_edges, mumax, nmubins, x, y, z,
                           boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins)
dr_res, dr_proj, _ = DDsmu(0, nthreads, r_edges, mumax, nmubins, x, y, z,
                           X2=x_rand, Y2=y_rand, Z2=z_rand,
                           boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins)
rr_res, rr_proj, qq_proj = DDsmu(1, nthreads, r_edges, mumax, nmubins, x_rand, y_rand, z_rand,
                                 boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins)
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
We can now compute the amplitudes of the correlation function from these continuous pair counts. The compute_amps function uses the Landy-Szalay formulation of the estimator, but adapted for continuous bases. (Note that you have to pass some values twice, as this is flexible enough to translate to cross-correlations between two datasets and two random catalogs.)
amps = compute_amps(nprojbins, nd, nd, nr, nr, dd_proj, dr_proj, dr_proj, rr_proj, qq_proj)
Computing amplitudes (Corrfunc/utils.py)
MIT
example_theory.ipynb
abbyw24/Corrfunc
With these amplitudes, we can evaluate our correlation function at any set of radial separations! Let's make a fine-grained array and evaluate. We need to pass 'nprojbins' and 'proj_type'. Because we will be evaluating our tophat function at the new separations, we also need to give it the original bins.
r_fine = np.linspace(rmin, rmax, 2000)
xi_proj = evaluate_xi(nprojbins, amps, len(r_fine), r_fine, nbins, r_edges, proj_type)
Evaluating xi (Corrfunc/utils.py)
MIT
example_theory.ipynb
abbyw24/Corrfunc
Let's check out the results, compared with the standard estimator!
plt.figure(figsize=(8,5))
plt.plot(r_fine, xi_proj, color='steelblue', label='Tophat estimator')
plt.plot(r_avg, xi_ls, marker='o', ls='None', color='grey', label='Standard estimator')
plt.xlabel(r'r ($h^{-1}$Mpc)')
plt.ylabel(r'$\xi$(r)')
plt.legend()
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
We can see that we're getting "the same" result, but continuously, with the hard bin edges made clear. Analytically computing the random term Because we're working with a periodic box, we don't actually need a random catalog. We can analytically compute the RR term, as well as the QQ matrix. We'll need the volume of the box, and the same info about our basis function as before:
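The idea, sketched here only up to the exact normalization conventions used by `qq_analytic`: for a uniform (unclustered) random field in a periodic box of volume $V$, the expected pair count in a tophat bin $[r_i, r_{i+1}]$ is simply proportional to the volume of that spherical shell,
$$\langle \mathrm{RR}_i \rangle \;\propto\; \frac{n_d^2}{V}\,\frac{4\pi}{3}\left(r_{i+1}^3 - r_i^3\right),$$
so no random catalog is needed.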
volume = boxsize**3
rr_ana, qq_ana = qq_analytic(rmin, rmax, nd, volume, nprojbins, nbins, r_edges, proj_type)
Evaluating qq_analytic (Corrfunc/utils.py)
MIT
example_theory.ipynb
abbyw24/Corrfunc
We also don't need to use the Landy-Szalay estimator (we don't have a DR term!). To get the amplitudes we can just use the naive estimator, $\frac{\text{DD}}{\text{RR}}-1$. In our formulation, the RR term in the denominator becomes the inverse QQ term, so we have QQ$^{-1}$ $\cdot$ (DD-RR).
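Written out, the amplitude vector computed in the next cell is
$$\mathbf{a} = \mathrm{QQ}^{-1}\left(\mathrm{DD} - \mathrm{RR}\right),$$
which we obtain with a least-squares solve rather than by forming the inverse explicitly.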
numerator = dd_proj - rr_ana
amps_ana, *_ = np.linalg.lstsq(qq_ana, numerator, rcond=None)  # Use linalg.lstsq instead of actually computing inverse!
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Now we can go ahead and evaluate the correlation function at our fine separations.
xi_ana = evaluate_xi(nbins, amps_ana, len(r_fine), r_fine, nbins, r_edges, proj_type)
Evaluating xi (Corrfunc/utils.py)
MIT
example_theory.ipynb
abbyw24/Corrfunc
We'll compare this to computing the analytic correlation function with standard Corrfunc:
xi_res = Corrfunc.theory.xi(boxsize, nthreads, r_edges, x, y, z)
xi_theory = np.array([x['xi'] for x in xi_res], dtype=float)

plt.figure(figsize=(8,5))
plt.plot(r_fine, xi_ana, color='blue', label='Tophat basis')
plt.plot(r_avg, xi_theory, marker='o', ls='None', color='grey', label='Standard estimator')
plt.xlabel(r'r ($h^{-1}$Mpc)')
plt.ylabel(r'$\xi$(r)')
plt.legend()
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Once again, the standard and continuous correlation functions line up exactly. The correlation function also looks smoother, as we didn't have to deal with a non-exact random catalog to estimate the window function. Continuous-function estimator: Cubic spline basis Now we can make things more interesting! Let's choose a cubic spline basis. Luckily, this capability comes with the continuous-function version of Corrfunc! We need to choose the parameters for our spline: if we used a linear spline, we'd get a piecewise-linear function; if we used a zeroth-order spline, we'd recover our tophat bases from above. We'll take the min and max of the same separation values we used, and choose half as many components as our previous number of bins (these will be related to the 'knots' in the spline). We'll also need the number of radial values at which to evaluate our basis functions; the code will interpolate between these. Then we'll write our basis to a file. For any set of basis functions that is read from a file, 'proj_type' must be set to 'generalr'.
proj_type = 'generalr'
kwargs = {'order': 3}  # 3: cubic spline
projfn = 'quadratic_spline.dat'
nprojbins = int(nbins/2)
spline.write_bases(rmin, rmax, nprojbins, projfn, ncont=1000, **kwargs)
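For comparison, a sketch of the other spline families mentioned above (not run here; the output filenames are just illustrative choices):

```python
# Same write_bases call as above; only the 'order' keyword and output filename change.
# order=1 gives a piecewise-linear basis; order=0 should recover the tophat bins used earlier.
spline.write_bases(rmin, rmax, nprojbins, 'linear_spline.dat', ncont=1000, order=1)
spline.write_bases(rmin, rmax, nprojbins, 'tophat_like_spline.dat', ncont=1000, order=0)
```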
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Let's check out the basis functions:
bases = np.loadtxt(projfn)
bases.shape
r = bases[:,0]

plt.figure(figsize=(8,5))
for i in range(1, len(bases[0])):
    plt.plot(r, bases[:,i], color='red', alpha=0.5)
plt.xlabel(r'r ($h^{-1}$Mpc)')
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
The bases on the ends are different so that they have the same normalization. We'll use the analytic version of the estimator, making sure to pass the basis file:
dd_res_spline, dd_spline, _ = DDsmu(1, nthreads, r_edges, mumax, nmubins, x, y, z,
                                    boxsize=boxsize, periodic=periodic, proj_type=proj_type,
                                    nprojbins=nprojbins, projfn=projfn)

volume = boxsize**3
# nbins and r_edges won't be used here because we passed projfn, but they're needed for compatibility. (TODO: fix!)
rr_ana_spline, qq_ana_spline = qq_analytic(rmin, rmax, nd, volume, nprojbins, nbins, r_edges, proj_type, projfn=projfn)

numerator = dd_spline - rr_ana_spline
amps_ana_spline, *_ = np.linalg.lstsq(qq_ana_spline, numerator, rcond=None)  # Use linalg.lstsq instead of actually computing inverse!
xi_ana_spline = evaluate_xi(nprojbins, amps_ana_spline, len(r_fine), r_fine, nbins, r_edges, proj_type, projfn=projfn)
Evaluating qq_analytic (Corrfunc/utils.py) Evaluating xi (Corrfunc/utils.py)
MIT
example_theory.ipynb
abbyw24/Corrfunc
Let's compare the results:
plt.figure(figsize=(8,5))
plt.plot(r_fine, xi_ana_spline, color='red', label='Cubic spline basis')
plt.plot(r_fine, xi_ana, color='blue', label='Tophat basis')
plt.plot(r_avg, xi_theory, marker='o', ls='None', color='grey', label='Standard estimator')
plt.xlabel(r'r ($h^{-1}$Mpc)')
plt.ylabel(r'$\xi$(r)')
plt.legend()
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
We can see that the spline basis produced a completely smooth correlation function; no hard-edged bins! It also captured the baryon acoustic feature (which we expect to be a smooth peak). This basis function is a bit noisy and likely has some non-physical features - but so does the tophat / standard basis! In the next notebook, we'll use a physically motivated basis function. Finally, remember to clean up the basis function file:
os.remove(projfn)
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
The shell command below (currently commented out) will convert this notebook to a regular Python script.
#!jupyter nbconvert --to script example_theory.ipynb
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
Data Analysis - FIB-SEM Datasets * Goal: identify changes that occurred across different time points
import os, sys, glob
import re
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
import matplotlib.pyplot as plt
import pprint
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
01 Compile data into a single .csv file for each label
mainpath = 'D:\PerlmutterData' folder = 'segmentation_compiled_export' data_folder = 'data' path = os.path.join(mainpath, folder, data_folder) print(path) folders = ['cell_membrane', 'nucleus', 'mito', 'cristae', 'inclusion', 'ER'] target_list = glob.glob(os.path.join(path, 'compile', '*.csv')) target_list = [os.path.basename(x) for x in target_list] target_list = [os.path.splitext(x)[0] for x in target_list] print(target_list) file_meta = { 'data_d00_batch01_loc01': 0, 'data_d00_batch02_loc02': 0, 'data_d00_batch02_loc03': 0, 'data_d07_batch01_loc01': 7, 'data_d07_batch02_loc01': 7, 'data_d07_batch02_loc02': 7, 'data_d14_batch01_loc01': 14, 'data_d17_batch01_loc01': 17, 'data_d21_batch01_loc01': 21, } for i in folders: file_list = glob.glob(os.path.join(path, 'raw', i, '*.csv')) if not i in target_list: df = pd.DataFrame() for j in file_list: data_temp = pd.read_csv(j, header = 1) filename_tmp = os.path.basename(j) # add filename data_temp['filename'] = filename_tmp # add day filename_noext = os.path.splitext(filename_tmp)[0] pattern = re.compile("data_d[0-9][0-9]_batch[0-9][0-9]_loc[0-9][0-9]") original_filename = pattern.search(filename_noext).group(0) day_tmp = file_meta[original_filename] data_temp['day'] = day_tmp df = df.append(data_temp, ignore_index = True) display(df) df.to_csv(os.path.join(path, 'compile', i + '.csv'))
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
02 Load data 02-01 Calculate the mean and total volume for mito, cristae, ER and inclusion
df_mito = pd.read_csv(os.path.join(path, 'compile', 'mito' + '.csv'))
# Convert to µm^3 and µm^2 (dividing by 1e9 and 1e6; the raw Volume3d/Area3d values appear to be in nm^3/nm^2)
df_mito['Volume3d_µm^3'] = df_mito['Volume3d']/1e9
df_mito['Area3d_µm^2'] = df_mito['Area3d']/1e6
df_mito_sum_grouped = df_mito.groupby(['day', 'filename']).sum().reset_index()
df_mito_mean_grouped = df_mito.groupby(['day', 'filename']).mean().reset_index()

df_cristae = pd.read_csv(os.path.join(path, 'compile', 'cristae' + '.csv'))
df_cristae['Volume3d_µm^3'] = df_cristae['Volume3d']/1e9
df_cristae['Area3d_µm^2'] = df_cristae['Area3d']/1e6
df_cristae_sum_grouped = df_cristae.groupby(['day', 'filename']).sum().reset_index()
df_cristae_mean_grouped = df_cristae.groupby(['day', 'filename']).mean().reset_index()

df_ER = pd.read_csv(os.path.join(path, 'compile', 'ER' + '.csv'))
df_ER['Volume3d_µm^3'] = df_ER['Volume3d']/1e9
df_ER['Area3d_µm^2'] = df_ER['Area3d']/1e6
df_ER_sum_grouped = df_ER.groupby(['day', 'filename']).sum().reset_index()
df_ER_mean_grouped = df_ER.groupby(['day', 'filename']).mean().reset_index()

df_inclusion = pd.read_csv(os.path.join(path, 'compile', 'inclusion' + '.csv'))
df_inclusion['Volume3d_µm^3'] = df_inclusion['Volume3d']/1e9
df_inclusion['Area3d_µm^2'] = df_inclusion['Area3d']/1e6
df_inclusion_sum_grouped = df_inclusion.groupby(['day', 'filename']).sum().reset_index()
df_inclusion_mean_grouped = df_inclusion.groupby(['day', 'filename']).mean().reset_index()
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
02-02 Calculate the total volume for cell membrane and nucleus
df_nucleus = pd.read_csv(os.path.join(path, 'compile', 'nucleus' + '.csv'))
df_nucleus['Volume3d_µm^3'] = df_nucleus['Volume3d']/1e9
df_nucleus['Area3d_µm^2'] = df_nucleus['Area3d']/1e6
df_nucleus_sum_grouped = df_nucleus.groupby(['day', 'filename']).sum().reset_index()

df_cell_membrane = pd.read_csv(os.path.join(path, 'compile', 'cell_membrane' + '.csv'))
df_cell_membrane['Volume3d_µm^3'] = df_cell_membrane['Volume3d']/1e9
df_cell_membrane['Area3d_µm^2'] = df_cell_membrane['Area3d']/1e6
df_cell_membrane_sum_grouped = df_cell_membrane.groupby(['day', 'filename']).sum().reset_index()

df_cell_membrane_sum_grouped
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
02-03 Calculate the volume of cytoplasm
df_cyto = pd.DataFrame()
df_cyto['filename'] = df_cell_membrane_sum_grouped['filename']
df_cyto['Volume3d_µm^3'] = df_cell_membrane_sum_grouped['Volume3d_µm^3'] - df_nucleus_sum_grouped['Volume3d_µm^3']
display(df_cyto)
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
02-03 Omit unhealthy data or data with poor quality
omit_data = ['data_d00_batch02_loc02', 'data_d17_batch01_loc01_01', 'data_d17_batch01_loc01_02'] for omit in omit_data: df_mito = df_mito.loc[df_mito['filename']!= omit+ '_mito.csv'] df_mito_sum_grouped = df_mito_sum_grouped.loc[df_mito_sum_grouped['filename']!=omit+ '_mito.csv'] df_mito_mean_grouped = df_mito_mean_grouped.loc[df_mito_mean_grouped['filename']!=omit+ '_mito.csv'] df_cristae = df_cristae.loc[df_cristae['filename']!=omit+ '_cristae.csv'] df_cristae_sum_grouped = df_cristae_sum_grouped.loc[df_cristae_sum_grouped['filename']!=omit+ '_cristae.csv'] df_cristae_mean_grouped = df_cristae_mean_grouped.loc[df_cristae_mean_grouped['filename']!=omit+ '_cristae.csv'] df_ER = df_ER.loc[df_ER['filename']!=omit+ '_ER.csv'] df_ER_sum_grouped = df_ER_sum_grouped.loc[df_ER_sum_grouped['filename']!=omit+ '_ER.csv'] df_ER_mean_grouped = df_ER_mean_grouped.loc[df_ER_mean_grouped['filename']!=omit+ '_ER.csv'] df_inclusion = df_inclusion.loc[df_inclusion['filename']!=omit+'_inclusion.csv'] df_inclusion_sum_grouped = df_inclusion_sum_grouped.loc[df_inclusion_sum_grouped['filename']!=omit+'_inclusion.csv'] df_inclusion_mean_grouped = df_inclusion_mean_grouped.loc[df_inclusion_mean_grouped['filename']!=omit+'_inclusion.csv'] df_nucleus = df_nucleus.loc[df_nucleus['filename']!=omit+'_nucleus.csv'] df_nucleus_sum_grouped = df_nucleus_sum_grouped.loc[df_nucleus_sum_grouped['filename']!=omit+'_nucleus.csv'] df_cell_membrane = df_cell_membrane.loc[df_cell_membrane['filename']!=omit+'_cell_membrane.csv'] df_cell_membrane_sum_grouped = df_cell_membrane_sum_grouped.loc[df_cell_membrane_sum_grouped['filename']!=omit+'_cell_membrane.csv'] df_cyto = df_cyto.loc[df_cyto['filename']!=omit+'_cell_membrane.csv'] df_mito = df_mito.reset_index(drop=True) df_mito_sum_grouped = df_mito_sum_grouped.reset_index(drop=True) df_mito_mean_grouped = df_mito_mean_grouped.reset_index(drop=True) df_cristae = df_cristae.reset_index(drop=True) df_cristae_sum_grouped = df_cristae_sum_grouped.reset_index(drop=True) df_cristae_mean_grouped = df_cristae_mean_grouped.reset_index(drop=True) df_ER = df_ER.reset_index(drop=True) df_ER_sum_grouped = df_ER_sum_grouped.reset_index(drop=True) df_ER_mean_grouped = df_ER_mean_grouped.reset_index(drop=True) df_inclusion = df_inclusion.reset_index(drop=True) df_inclusion_sum_grouped = df_inclusion_sum_grouped.reset_index(drop=True) df_inclusion_mean_grouped = df_inclusion_mean_grouped.reset_index(drop=True) df_nucleus = df_nucleus.reset_index(drop=True) df_nucleus_sum_grouped = df_nucleus_sum_grouped.reset_index(drop=True) df_cell_membrane = df_cell_membrane.reset_index(drop=True) df_cell_membrane_sum_grouped = df_cell_membrane_sum_grouped.reset_index(drop=True) df_cyto = df_cyto.reset_index(drop=True) df_mito.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito.csv')) df_mito_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_sum_volume.csv')) df_mito_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_mean_volume.csv')) df_cristae.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae.csv')) df_cristae_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_sum_volume.csv')) df_cristae_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_mean_volume.csv')) df_ER.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER.csv')) df_ER_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 
'ER_sum_volume.csv')) df_ER_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER_mean_volume.csv')) df_inclusion.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion.csv')) df_inclusion_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_sum_volume.csv')) df_inclusion_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_mean_volume.csv')) df_nucleus.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'nucleus.csv')) df_nucleus_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'nucleus_sum_volume.csv')) df_cell_membrane.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cell_membrane_volume.csv')) df_cell_membrane_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cell_membrane_sum_volume.csv')) df_cyto.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cytoplasm_sum_volume.csv')) df_mito_sum_grouped
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
02-04 Compile the total volume of mito, cristae, ER and inclusion into one table: 1. raw values 2. values normalized by the total volume of cytoplasm
df_sum_compiled = pd.DataFrame() df_sum_compiled[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']] df_sum_compiled['day'] = df_sum_compiled['day'].astype('int8') df_sum_compiled[['mito_Volume3d_µm^3', 'mito_Area3d_µm^2']] = df_mito_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_sum_compiled[['cristae_Volume3d_µm^3', 'cristae_Area3d_µm^2']] = df_cristae_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_sum_compiled[['ER_Volume3d_µm^3', 'ER_Area3d_µm^2']] = df_ER_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_inclusion_sum_tmp = df_inclusion_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_inclusion_sum_fill = pd.DataFrame([[0, 0]], columns = ['Volume3d_µm^3', 'Area3d_µm^2']) df_inclusion_sum_tmp = df_inclusion_sum_fill.append(df_inclusion_sum_tmp, ignore_index = True) df_sum_compiled[['inclusion_Volume3d_µm^3', 'inclusion_Area3d_µm^2']] = df_inclusion_sum_tmp df_sum_compiled fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10)) idx = 0 for i in range(2): for j in range(2): ax[i, j].bar(df_sum_compiled.index, df_sum_compiled.iloc[:, idx +2], tick_label=['0', '0', '7', '7', '7', '14', '21']) ax[i, j].set_title(df_sum_compiled.columns[idx+2]) ax[i, j].set_xlabel('Day') idx += 1 # ax[i].set_ylabel('Total Volume ($µm^3$)') fig.tight_layout(pad=3.0) mainpath = 'D:\PerlmutterData' folder = 'segmentation_compiled_export' data_folder = 'data' plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_cristae_totoal_volume_area.png')) plt.show() fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10)) idx = 4 for i in range(2): for j in range(2): ax[i, j].bar(df_sum_compiled.index, df_sum_compiled.iloc[:, idx +2], tick_label=['0', '0', '7', '7', '7', '14', '21']) ax[i, j].set_title(df_sum_compiled.columns[idx+2]) ax[i, j].set_xlabel('Day') idx += 1 # ax[i].set_ylabel('Total Volume ($µm^3$)') fig.tight_layout(pad=3.0) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_inclusion_totoal_volume_area.png')) plt.show() df_sum_compiled_normalized = pd.DataFrame() df_sum_compiled_normalized[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']] cal_tmp = df_sum_compiled.iloc[:, 2:].div(df_cyto['Volume3d_µm^3'], axis=0) df_sum_compiled_normalized = pd.concat([df_sum_compiled_normalized, cal_tmp], axis=1) df_sum_compiled_normalized fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10)) idx = 0 for i in range(2): for j in range(2): ax[i, j].bar(df_sum_compiled_normalized.index, df_sum_compiled_normalized.iloc[:, idx +2], tick_label=['0', '0', '7', '7', '7', '14', '21']) ax[i, j].set_title(df_sum_compiled_normalized.columns[idx+2]) ax[i, j].set_xlabel('Day') idx += 1 # ax[i].set_ylabel('Total Volume ($µm^3$)') fig.tight_layout(pad=3.0) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_cristae_normalized_totoal_volume_area.png')) plt.show() fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10)) idx = 4 for i in range(2): for j in range(2): ax[i, j].bar(df_sum_compiled_normalized.index, df_sum_compiled_normalized.iloc[:, idx +2], tick_label=['0', '0', '7', '7', '7', '14', '21']) ax[i, j].set_title(df_sum_compiled_normalized.columns[idx+2]) ax[i, j].set_xlabel('Day') idx += 1 # ax[i].set_ylabel('Total Volume ($µm^3$)') fig.tight_layout(pad=3.0) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_inclusion_normalized_totoal_volume_area.png')) plt.show()
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
02-05 Compile the mean volume of mito, cristae, ER and inclusion into one table: 1. raw values 2. values normalized by the total volume of cytoplasm
df_mean_compiled = pd.DataFrame() df_mean_compiled[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']] df_mean_compiled['day'] = df_mean_compiled['day'].astype('int8') df_mean_compiled[['mito_Volume3d_µm^3', 'mito_Area3d_µm^2']] = df_mito_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_mean_compiled[['cristae_Volume3d_µm^3', 'cristae_Area3d_µm^2']] = df_cristae_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_mean_compiled[['ER_Volume3d_µm^3', 'ER_Area3d_µm^2']] = df_ER_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_inclusion_mean_tmp = df_inclusion_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_inclusion_mean_fill = pd.DataFrame([[0, 0]], columns = ['Volume3d_µm^3', 'Area3d_µm^2']) df_inclusion_mean_tmp = df_inclusion_mean_fill.append(df_inclusion_mean_tmp, ignore_index = True) df_mean_compiled[['inclusion_Volume3d_µm^3', 'inclusion_Area3d_µm^2']] = df_inclusion_mean_tmp df_mean_compiled fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10)) idx = 0 for i in range(2): for j in range(2): ax[i, j].bar(df_mean_compiled.index, df_mean_compiled.iloc[:, idx +2], tick_label=['0', '0', '7', '7', '7', '14', '21']) ax[i, j].set_title(df_mean_compiled.columns[idx+2]) ax[i, j].set_xlabel('Day') idx += 1 # ax[i].set_ylabel('Total Volume ($µm^3$)') fig.tight_layout(pad=3.0) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_cristae_mean_volume_area.png')) plt.show() fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10)) idx = 4 for i in range(2): for j in range(2): ax[i, j].bar(df_mean_compiled.index, df_mean_compiled.iloc[:, idx +2], tick_label=['0', '0', '7', '7', '7', '14', '21']) ax[i, j].set_title(df_mean_compiled.columns[idx+2]) ax[i, j].set_xlabel('Day') idx += 1 # ax[i].set_ylabel('Total Volume ($µm^3$)') fig.tight_layout(pad=3.0) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_inclusion_mean_volume_area.png')) plt.show() ''' df_mean_compiled_normalized = pd.DataFrame() df_mean_compiled_normalized[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']] cal_tmp = df_mean_compiled.iloc[:, 2:].div(df_cyto['Volume3d_µm^3'], axis=0) df_mean_compiled_normalized = pd.concat([df_mean_compiled_normalized, cal_tmp], axis=1) df_mean_compiled_normalized ''' ''' fig, ax = plt.subplots(nrows=8, ncols=1, figsize=(5, 30)) for i in range(8): ax[i].bar(df_mean_compiled_normalized.index, df_mean_compiled_normalized.iloc[:, i +2], tick_label=['0', '0', '0', '7', '7', '7', '14', '17', '17', '21']) ax[i].set_title(df_mean_compiled_normalized.columns[i+2]) ax[i].set_xlabel('Day') fig.tight_layout(pad=3.0) plt.show() '''
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
02-06 Distribution
# mito maxval = df_mito['Volume3d_µm^3'].max() minval = df_mito['Volume3d_µm^3'].min() print(maxval) print(minval) bins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 100) # bins = np.linspace(500000000, minval + (maxval - minval)* 1, num = 50) days = [0, 7, 14, 21] nrows = 4 ncols = 1 fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15)) for i, day in enumerate(days): df_tmp = df_mito.loc[df_mito['day'] == day, :] axes[i%nrows].hist(df_tmp['Volume3d_µm^3'], bins= bins, log=True, density = True) axes[i%nrows].set_xlim([0, maxval]) axes[i%nrows].set_ylim([0, 1]) axes[i%nrows].set_title('Day ' + str(day)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_mito_volume.png')) plt.show() # cristae maxval = df_cristae['Area3d_µm^2'].max() minval = df_cristae['Area3d_µm^2'].min() print(maxval) print(minval) bins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 100) # bins = np.linspace(500000000, minval + (maxval - minval)* 1, num = 50) days = [0, 7, 14, 21] nrows = 4 ncols = 1 fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15)) for i, day in enumerate(days): df_tmp = df_cristae.loc[df_cristae['day'] == day, :] axes[i%nrows].hist(df_tmp['Area3d_µm^2'], bins= bins, log=True, density = True) axes[i%nrows].set_xlim([0, maxval]) axes[i%nrows].set_ylim([0, 0.1]) axes[i%nrows].set_title('Day ' + str(day)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_cristae_volume.png')) plt.show() # ER maxval = df_ER['Volume3d_µm^3'].max() minval = df_ER['Volume3d_µm^3'].min() print(maxval) print(minval) factor = 0.03 bins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* factor, num = 100) days = [0, 7, 14, 21] nrows = 4 ncols = 1 fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15)) for i, day in enumerate(days): df_tmp = df_ER.loc[df_ER['day'] == day, :] axes[i%nrows].hist(df_tmp['Volume3d_µm^3'], bins= bins, log=True, density = True) axes[i%nrows].set_xlim([0, minval + (maxval - minval)* factor]) axes[i%nrows].set_ylim([0, 100]) axes[i%nrows].set_title('Day ' + str(day)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_ER_volume.png')) plt.show() # inclusion maxval = df_inclusion['Volume3d_µm^3'].max() minval = df_inclusion['Volume3d_µm^3'].min() print(maxval) print(minval) factor = 1 bins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* factor, num = 100) days = [0, 7, 14, 21] nrows = 4 ncols = 1 fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15)) for i, day in enumerate(days): df_tmp = df_inclusion.loc[df_inclusion['day'] == day, :] axes[i%nrows].hist(df_tmp['Volume3d_µm^3'], bins= bins, log=True, density = True) axes[i%nrows].set_xlim([0, minval + (maxval - minval)* factor]) axes[i%nrows].set_ylim([0, 1]) axes[i%nrows].set_title('Day ' + str(day)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_inclusion_volume.png')) plt.show()
261.036 1e-06
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
03 Load Data from Auto Skeletonization of Mitochondria 03-01
mainpath = 'D:\PerlmutterData' folder = 'segmentation_compiled_export' data_folder = 'data' path = os.path.join(mainpath, folder, data_folder) print(path) folders = ['skeleton_output'] subcat = ['nodes', 'points', 'segments_s'] target_list = glob.glob(os.path.join(path, 'compile', '*.csv')) target_list = [os.path.basename(x) for x in target_list] target_list = [os.path.splitext(x)[0] for x in target_list] print(target_list) file_meta = { 'data_d00_batch01_loc01': 0, 'data_d00_batch02_loc02': 0, 'data_d00_batch02_loc03': 0, 'data_d07_batch01_loc01': 7, 'data_d07_batch02_loc01': 7, 'data_d07_batch02_loc02': 7, 'data_d14_batch01_loc01': 14, 'data_d17_batch01_loc01': 17, 'data_d21_batch01_loc01': 21, } for i in subcat: file_list = glob.glob(os.path.join(path, 'raw', 'skeleton_output', '*', i + '.csv')) # print(file_list) if not i in target_list: df = pd.DataFrame() for j in file_list: data_temp = pd.read_csv(j, header = 0) foldername_tmp = os.path.dirname(j) foldername_tmp = os.path.basename(foldername_tmp) # add day pattern = re.compile("data_d[0-9][0-9]_batch[0-9][0-9]_loc[0-9][0-9]") original_foldername = pattern.search(foldername_tmp).group(0) day_tmp = file_meta[original_foldername] data_temp['day'] = day_tmp # add filename data_temp['filename'] = original_foldername df = df.append(data_temp, ignore_index = True) display(df) df.to_csv(os.path.join(path, 'compile', i + '.csv')) df_points = pd.read_csv(os.path.join(path, 'compile', 'points' + '.csv')) df_segments = pd.read_csv(os.path.join(path, 'compile', 'segments_s' + '.csv')) df_nodes = pd.read_csv(os.path.join(path, 'compile', 'nodes' + '.csv')) df_points for omit in omit_data: df_points = df_points.loc[df_points['filename']!= omit] df_segments = df_segments.loc[df_segments['filename']!=omit] df_nodes = df_nodes.loc[df_nodes['filename']!=omit] # points maxval = df_points['thickness'].max() minval = df_points['thickness'].min() days = [0, 7, 14, 21] bins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 20) nrows = 4 ncols = 1 fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15)) for i, day in enumerate(days): df_tmp = df_points.loc[df_points['day'] == day, :] axes[i%nrows].hist(df_tmp['thickness'], bins= bins, log=False, density = True) axes[i%nrows].set_ylim([0, 0.008]) axes[i%nrows].set_title('Day ' + str(day)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_points_thickness.png')) plt.show() # segments maxval = df_segments['thickness'].max() minval = df_segments['thickness'].min() days = [0, 7, 14, 21] bins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 20) nrows = 4 ncols = 1 fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15)) for i, day in enumerate(days): df_tmp = df_segments.loc[df_segments['day'] == day, :] axes[i%nrows].hist(df_tmp['thickness'], bins= bins, log=False, density = True) axes[i%nrows].set_ylim([0, 0.008]) axes[i%nrows].set_title('Day ' + str(day)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_segments_thickness.png')) plt.show() df_segments_count_grouped = df_nodes.groupby(['day', 'filename', 'Coordination Number']).count().reset_index() df_segments_count_grouped filename = df_segments_count_grouped['filename'].unique() print(filename) days = [0, 7, 14, 21] nrows = 4 ncols = 1 fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15)) for i, day in enumerate(days): df_tmp = df_segments_count_grouped.loc[df_segments_count_grouped['day'] == day] 
x = df_tmp['Coordination Number'] y = df_tmp['Node ID'] x_pos = [str(i) for i in x] axes[i%nrows].bar(x_pos, y) axes[i%nrows].set_title('Day ' + str(day)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'coordination_number.png')) plt.show()
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
04 Average of total volume
mito_mean = df_mito_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index() mito_sem = df_mito_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index() mito_sem = mito_sem.fillna(0) mito_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_mean.csv')) mito_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_sem.csv')) fig = plt.figure(figsize=(5, 5)) x = ['Day 0', 'Day 7', 'Day 14', 'Day 21'] plt.bar(x, mito_mean['Volume3d_µm^3'], yerr= mito_sem['Volume3d_µm^3'], error_kw=dict(capsize=10)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_mean_barplot.png')) plt.show() cristae_mean = df_cristae_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index() cristae_sem = df_cristae_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index() cristae_sem = cristae_sem.fillna(0) cristae_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_mean.csv')) cristae_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_sem.csv')) fig = plt.figure(figsize=(5, 5)) x = ['Day 0', 'Day 7', 'Day 14', 'Day 21'] plt.bar(x, cristae_mean['Volume3d_µm^3'], yerr= cristae_sem['Volume3d_µm^3'], error_kw=dict(capsize=10)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'cristae_mean_barplot.png')) plt.show() ER_mean = df_ER_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index() ER_sem = df_ER_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index() ER_sem = ER_sem.fillna(0) ER_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER_mean.csv')) ER_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER_sem.csv')) fig = plt.figure(figsize=(5, 5)) x = ['Day 0', 'Day 7', 'Day 14', 'Day 21'] plt.bar(x, ER_mean['Volume3d_µm^3'], yerr= ER_sem['Volume3d_µm^3'], error_kw=dict(capsize=10)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_mean_barplot.png')) plt.show() inclusion_mean = df_inclusion_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index() inclusion_sem = df_inclusion_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index() inclusion_sem = inclusion_sem.fillna(0) inclusion_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_mean.csv')) inclusion_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_sem.csv')) fig = plt.figure(figsize=(5, 5)) x = ['Day 0', 'Day 7', 'Day 14', 'Day 21'] plt.bar(x, inclusion_mean['Volume3d_µm^3'], yerr= inclusion_sem['Volume3d_µm^3'], error_kw=dict(capsize=10)) plt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'inclusion_mean_barplot.png')) plt.show()
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
Harvesting data from Home This is an example of how my original recipe for [harvesting data from The Bulletin](Harvesting-data-from-the-Bulletin.ipynb) can be modified for other journals. If you'd like a pre-harvested dataset of all the Home covers (229 images in a 3.3GB zip file), open this link using your preferred BitTorrent client:
# Let's import the libraries we need.
import requests
from bs4 import BeautifulSoup
import time
import json
import os
import re

# Create a directory for this journal
# Edit as necessary for a new journal
data_dir = '../../data/Trove/Home'
os.makedirs(data_dir, exist_ok=True)
_____no_output_____
MIT
Trove/Cookbook/Harvesting-data-from-the-Home.ipynb
wragge/ozglam-workbench
Getting the issue data Each issue of a digitised journal like *Home* has its own unique identifier. You've probably noticed them in the urls of Trove resources. They look something like this: `nla.obj-362409353`. Once we have the identifier for an issue we can easily download the contents, but how do we get a complete list of identifiers? The [harvesting data from the Bulletin](Harvesting-data-from-the-Bulletin.ipynb) notebook explains how we can find a url that lists all the available issues of a journal. This is the url we need to start harvesting issue metadata about *Home*. You could easily modify this to get metadata from another journal by changing the identifier.```https://nla.gov.au/nla.obj-362409353/browse?startIdx=0&rows=20&op=c```
# This is just the url we found above, with a slot into which we can insert the startIdx value
# If you want to download data from another journal, just change the nla.obj identifier to point to the journal.
start_url = 'https://nla.gov.au/nla.obj-362409353/browse?startIdx={}&rows=20&op=c'

# The initial startIdx value
start = 0
# Number of results per page
n = 20

issues = []
# If there aren't 20 results on the page then we've reached the end, so continue harvesting until that happens.
while n == 20:
    # Get the browse page
    response = requests.get(start_url.format(start))
    # Beautifulsoup turns the HTML into an easily navigable structure
    soup = BeautifulSoup(response.text, 'lxml')
    # Find all the divs containing issue details and loop through them
    details = soup.find_all(class_='l-item-info')
    for detail in details:
        issue = {}
        # Get the issue id
        issue['id'] = detail.dt.a.string
        rows = detail.find_all('dd')
        # Get the issue details
        issue['details'] = rows[2].p.string
        # Get the number of pages
        issue['pages'] = re.search(r'^(\d+)', detail.find('a', class_="browse-child").text, flags=re.MULTILINE).group(1)
        issues.append(issue)
        print(issue)
    time.sleep(0.2)
    # Increment the startIdx
    start += n
    # Set n to the number of results on the current page
    n = len(details)

len(issues)

# Save the harvested results as a JSON file in case we need them later on
with open('{}/home_issues.json'.format(data_dir), 'w') as outfile:
    json.dump(issues, outfile)

# Open the saved JSON file
with open('{}/home_issues.json'.format(data_dir), 'r') as infile:
    issues = json.load(infile)
_____no_output_____
MIT
Trove/Cookbook/Harvesting-data-from-the-Home.ipynb
wragge/ozglam-workbench
Cleaning up the metadata So far we've just grabbed the complete issue details as a single string. It would be good to parse this string so that we have the dates, volume and issue numbers in separate fields. As is always the case, there's a bit of variation in the way this information is recorded. The code below tries out different combinations and then saves the structured data in a Python list. I had to modify the code I used with the *Bulletin* due to slight variations in the way the issue data was recorded. For example, issue dates for *Home* use the full names of months, while the *Bulletin* records used abbreviations. It's likely that there will be other variations between journals, so you might have to adjust this code.
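For instance, the two date formats handled at the end of the cell below correspond to arrow format strings like these (a small illustration with made-up dates, using the same format strings as the cell below):

```python
import arrow

# Full month name, as used in Home issue details
arrow.get('2 July 1925', 'D MMMM YYYY')   # parses to 1925-07-02
# Abbreviated month name, the fallback format
arrow.get('2 Jul 1925', 'D MMM YYYY')     # parses to 1925-07-02
```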
import arrow from arrow.parser import ParserError issues_data = [] # Loop through the issues for issue in issues: issue_data = {} issue_data['id'] = issue['id'] issue_data['pages'] = int(issue['pages']) print(issue['details']) try: # This pattern looks for details in the form: Vol. 2 No. 3 (2 Jul 1878) details = re.search(r'(.*)Vol. (\d+) No\.* (\d+) \((.+)\)', issue['details'].strip()) issue_data['label'] = details.group(1).strip() issue_data['volume'] = details.group(2) issue_data['number'] = details.group(3) date = details.group(4) except AttributeError: try: # This pattern looks for details in the form: No. 3 (2 Jul 1878) details = re.search(r'No. (\d+) \((.+)\)', issue['details'].strip()) issue_data['label'] = '' issue_data['volume'] = '' issue_data['number'] = details.group(1) date = details.group(2) except AttributeError: try: # This pattern looks for details in the form: Bulletin Christmas Edition (2 Jul 1878) details = re.search(r'(.*) \((.+)\)', issue['details'].strip()) issue_data['label'] = details.group(1) issue_data['volume'] = '' issue_data['number'] = '' date = details.group(2) except AttributeError: # This pattern looks for details in the form: Bulletin 1878 Jul 3 details = re.search(r'Bulletin (.+)', issue['details'].strip()) date_str = details.group(1) # Date is wrong way round, split and reverse date = ' '.join(reversed(date_str.split())) issue_data['label'] = '' issue_data['volume'] = '' issue_data['number'] = '' # Normalise months date = date.replace('Sept', 'Sep').replace('Sepember', 'September').replace('July August', 'July').replace('September October', 'September').replace(' ', ' ') # Convert date to ISO format try: issue_data['date'] = arrow.get(date, 'D MMMM YYYY').isoformat()[:-15] except ParserError: issue_data['date'] = arrow.get(date, 'D MMM YYYY').isoformat()[:-15] issues_data.append(issue_data)
_____no_output_____
MIT
Trove/Cookbook/Harvesting-data-from-the-Home.ipynb
wragge/ozglam-workbench
Save as CSV Now that the issues data is in a nice, structured form, we can load it into a Pandas dataframe. This allows us to do things like find the total number of pages digitised. We can also save the metadata as a CSV.
import pandas as pd

# Convert issues metadata into a dataframe
df = pd.DataFrame(issues_data, columns=['id', 'label', 'volume', 'number', 'date', 'pages'])

# Find the total number of pages
df['pages'].sum()

# Save metadata as a CSV.
df.to_csv('{}/home_issues.csv'.format(data_dir), index=False)
_____no_output_____
MIT
Trove/Cookbook/Harvesting-data-from-the-Home.ipynb
wragge/ozglam-workbench
Download front covers Options for downloading images, PDFs and text are described in the [harvesting data from the Bulletin](Harvesting-data-from-the-Bulletin.ipynb) notebook. In this recipe we'll just download the front covers (because they're awesome). The code below checks to see if an image has already been saved before downloading it, so if the process is interrupted you can just run it again to pick up where it stopped. If more issues are added to Trove you could run it again to pick up any new images.
import zipfile
import io

# Prepare a directory to save the images into
output_dir = data_dir + '/images'
os.makedirs(output_dir, exist_ok=True)

# Loop through the issue metadata
for issue in issues_data:
    print(issue['id'])
    id = issue['id']
    # Check to see if the first page of this issue has already been downloaded
    if not os.path.exists('{}/{}-1.jpg'.format(output_dir, id)):
        url = 'https://nla.gov.au/{}/download?downloadOption=zip&firstPage=0&lastPage=0'.format(id)
        # Get the file
        r = requests.get(url)
        # The image is in a zip, so we need to extract the contents into the output directory
        z = zipfile.ZipFile(io.BytesIO(r.content))
        z.extractall(output_dir)
        time.sleep(1)
_____no_output_____
MIT
Trove/Cookbook/Harvesting-data-from-the-Home.ipynb
wragge/ozglam-workbench
"Old Skool Image Classification"> "A blog on how to manuallly create features from an Image for classification task."- toc: true- branch: master- badges: true- comments: false- categories: [CV, image classification, feature engineering, pyTorch, CIFAR10]- image: images/blog1.png- hide: false- search_exclude: true IntroductionThe objective of the current notebook is to give a glimpse of some of the methods for feature extraction that were prevelent before the advent of Deep Neural Networks in the Computer Vision domain.In the current Notebook we shall see how to do the same using the Python language with CIFAR10 dataset. The goal is to extract several features from the provided images and finally perform Image Classification using a Multi Layer Perceptron. Texture AnalysisOn the preface of the book _Image Processing: Dealing with Textures_ {% fn 1 %}, the authors provided a very captivating definition of Texture: __Texture is what makes life beautiful; texture is what makes life interesting and texture is what makes life possible. Texture is what makes Mozart’s music beautiful, the masterpieces of the art of the Renaissance classical and the facades of Barcelona’s buildings attractive."__.(Not so helpful eh!) So, what is Texture? Technically, its the 'variation of Data on a smaller scale than the scale of interest'.Classical Visual Computing comprises of two main branches when it comes to analyzing Texture of an Image [[1]](1):1. Structural - Local Binary Pattern - Gabor Wavelets - Fourier Co-efficients2. Statistical - Co-Occurance Matrix - Orientation Histogram Workflow- We shall start by downloading the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.- Next, we manually craft the features to obtain Texture Metrics.- After creating functions to obtain textual features from an image, we create a loop to extract the same from all the images in Training and Test dataset.- Next, we save the Training and Test set extracted features as serialized file. - Then we shall use the created features as co-variates against the label for each image and train a Softmax classifier on the Training Set. - Eventually, we evaluate the classifier on the Test set.
%matplotlib inline

from torchvision import datasets
import PIL
from skimage.feature import local_binary_pattern, greycomatrix, greycoprops
from skimage.filters import gabor

import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader
import torch.nn.functional as F

import numpy as np
import matplotlib.pyplot as plt
import tqdm
from tqdm import notebook
from pathlib import Path
import pickle
import time
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Data Loading
#collapse
#collapse-output
trainDset = datasets.CIFAR10(root="./cifar10/", train=True, download=True)
testDset = datasets.CIFAR10(root="./cifar10/", train=False, download=True)

# Looking at a single image
#collapse
img = trainDset[0][0]         # PIL Image
img_grey = img.convert('L')   # convert to grey-scale
img_arr = np.array(img_grey)  # convert to numpy array
plt.imshow(img)
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Local Binary Patterns (LBP) LBP is helpful in extracting the "local" structure of the image. It does so by encoding the local neighbourhood after it has been maximally simplified, i.e. binarized. If we want to apply LBP to a coloured image, we need to do so individually on each channel (Red/Blue/Green).
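Concretely, the classic LBP operator compares each pixel's $P$ neighbours (sampled on a circle of radius $R$) against the centre value $g_c$ and packs the comparison bits into a code:
$$\mathrm{LBP}_{P,R}(x_c, y_c) = \sum_{p=0}^{P-1} s\left(g_p - g_c\right) 2^p, \qquad s(z) = \begin{cases} 1 & z \ge 0 \\ 0 & z < 0 \end{cases}$$
The `'uniform'` method passed to skimage's `local_binary_pattern` below is a rotation-invariant, uniform-pattern variant of this code.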
#collapse
feat_lbp = local_binary_pattern(img_arr, 8, 1, 'uniform')
feat_lbp = np.uint8((feat_lbp/feat_lbp.max())*255)  # converting to uint8
lbp_img = PIL.Image.fromarray(feat_lbp)             # convert from array
plt.imshow(lbp_img, cmap='gray')

# Energy, Entropy
def get_lbp(img):
    """Compute the energy and entropy of an LBP image's histogram."""
    lbp_hist, _ = np.histogram(img, 8)
    lbp_hist = np.array(lbp_hist, dtype=float)
    lbp_prob = np.divide(lbp_hist, np.sum(lbp_hist))
    lbp_prob = np.where(np.isclose(0, lbp_prob), 0.0000000001, lbp_prob)  # to avoid log(0)
    lbp_energy = np.sum(lbp_prob**2)
    lbp_entropy = -np.sum(np.multiply(lbp_prob, np.log2(lbp_prob)))
    return lbp_energy, lbp_entropy
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Co-occurrence Matrix Intuitively, if we record the intensity of each pixel together with the intensities of its neighbouring pixels, we capture both spatial and relative information. This is where co-occurrence matrices are useful: they represent the joint probability of pairs of pixel values occurring at a chosen relative offset. Once we have the co-occurrence matrix, we can calculate feature metrics such as:- $\textbf{Energy} = \sum_{m=0}^{G-1}\sum_{n=0}^{G-1}p^2\left(m,n\right)$- $\textbf{Entropy} = -\sum_{m=0}^{G-1}\sum_{n=0}^{G-1}p\left(m,n\right)\cdot \log \left(p\left(m,n\right)\right)$- $\textbf{Contrast} = \frac{1}{(G-1)^2}\sum_{m=0}^{G-1}\sum_{n=0}^{G-1}(m-n)^2\cdot p(m,n)$- $\textbf{Homogeneity} = \sum_{m=0}^{G-1}\sum_{n=0}^{G-1} \frac{p(m,n)}{1+|m-n|}$ where $p(m,n)$ is the co-occurrence probability of grey levels $m$ and $n$, and $G$ is the total number of grey levels we use; $G=256$ for an 8-bit grey-scale image.
def creat_cooccur(img_arr, *args, **kwargs):
    """Implements extraction of features from the co-occurrence matrix"""
    gCoMat = greycomatrix(img_arr, [2], [0], 256, symmetric=True, normed=True)
    contrast = greycoprops(gCoMat, prop='contrast')
    dissimilarity = greycoprops(gCoMat, prop='dissimilarity')
    homogeneity = greycoprops(gCoMat, prop='homogeneity')
    energy = greycoprops(gCoMat, prop='energy')
    correlation = greycoprops(gCoMat, prop='correlation')
    return contrast[0][0], dissimilarity[0][0], homogeneity[0][0], energy[0][0], correlation[0][0]

#collapse
gCoMat = greycomatrix(img_arr, [2], [0], 256, symmetric=True, normed=True)
contrast = greycoprops(gCoMat, prop='contrast')
dissimilarity = greycoprops(gCoMat, prop='dissimilarity')
homogeneity = greycoprops(gCoMat, prop='homogeneity')
energy = greycoprops(gCoMat, prop='energy')
correlation = greycoprops(gCoMat, prop='correlation')
print(energy[0][0])
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
[Gabor Filter](https://en.wikipedia.org/wiki/Gabor_filterApplications_of_2-D_Gabor_filters_in_image_processing)
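Briefly, a 2-D Gabor filter is a sinusoid modulated by a Gaussian envelope; in one common real-valued parameterization,
$$g(x, y) = \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\cos\!\left(2\pi f x'\right),$$
where $x', y'$ are the coordinates rotated to the filter orientation, $f$ is the spatial frequency (the `frequency=0.6` argument in the cell below), and $\gamma, \sigma$ control the envelope shape. skimage's `gabor` returns the real and imaginary filter responses used next.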
gf_real, gf_img = gabor(img_arr, frequency=0.6)
gf = (gf_real**2 + gf_img**2)//2

# Displaying the filter response
fig, ax = plt.subplots(1, 3)
ax[0].imshow(gf_real, cmap='gray')
ax[1].imshow(gf_img, cmap='gray')
ax[2].imshow(gf, cmap='gray')

def get_gabor(img, N, *args, **kwargs):
    """Gabor feature extraction: energy and entropy of the filter-response histogram."""
    gf_real, gf_img = gabor(img, frequency=0.6)
    gf = (gf_real**2 + gf_img**2)//2
    gabor_hist, _ = np.histogram(gf, N)
    gabor_hist = np.array(gabor_hist, dtype=float)
    gabor_prob = np.divide(gabor_hist, np.sum(gabor_hist))
    # To discard pixels resulting in 0 probability
    gabor_prob = np.where(np.isclose(0, gabor_prob), 0.0000000001, gabor_prob)
    gabor_energy = np.sum(gabor_prob**2)
    gabor_entropy = np.sum(np.multiply(gabor_prob, np.log2(gabor_prob)))
    return gabor_energy, gabor_entropy
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Feature Extraction
# Generate Training Data
# Extract features from all images
label = []
featLength = 2+5+2  # LBP, co-occurrence, Gabor
trainFeats = np.zeros((len(trainDset), featLength))
testFeats = np.zeros((len(testDset), featLength))

for tr in tqdm.tqdm_notebook(range(len(trainFeats))):
    img = trainDset[tr][0]
    img_grey = img.convert('L')
    img_arr = np.array(img_grey.getdata()).reshape(img.size[1], img.size[0])
    # LBP
    feat_lbp = local_binary_pattern(img_arr, 5, 2, 'uniform').reshape(img.size[0]*img.size[1])
    feat_lbp = np.uint8((feat_lbp/feat_lbp.max())*255)  # converting to uint8
    lbp_energy, lbp_entropy = get_lbp(feat_lbp)
    # Co-occurrence
    gCoMat = greycomatrix(img_arr, [2], [0], 256, True, True)
    featglcm = np.array(creat_cooccur(img_arr))
    # Gabor
    gabor_energy, gabor_entropy = get_gabor(img_arr, 8)
    # Concatenate features
    concat_feat = np.concatenate(([lbp_energy, lbp_entropy], featglcm, [gabor_energy, gabor_entropy]), axis=0)
    trainFeats[tr,:] = concat_feat
    label.append(trainDset[tr][1])
trainLabel = np.array(label)

label = []
for ts in tqdm.tqdm_notebook(range(len(testDset))):
    img = testDset[ts][0]
    img_grey = img.convert('L')
    img_arr = np.array(img_grey.getdata()).reshape(img.size[1], img.size[0])
    # LBP
    feat_lbp = local_binary_pattern(img_arr, 5, 2, 'uniform').reshape(img.size[0]*img.size[1])
    lbp_energy, lbp_entropy = get_lbp(feat_lbp)
    # Co-occurrence
    gCoMat = greycomatrix(img_arr, [2], [0], 256, True, True)
    featglcm = np.array(creat_cooccur(img_arr))
    # Gabor
    gabor_energy, gabor_entropy = get_gabor(img_arr, 8)
    # Concatenate features
    concat_feat = np.concatenate(([lbp_energy, lbp_entropy], featglcm, [gabor_energy, gabor_entropy]), axis=0)
    testFeats[ts,:] = concat_feat
    label.append(testDset[ts][1])
testLabel = np.array(label)
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Normalize Features
# Normalizing the train features to the range [0,1]
trMaxs = np.amax(trainFeats, axis=0)      # Finding maximum along each column
trMins = np.amin(trainFeats, axis=0)      # Finding minimum along each column
trMaxs_rep = np.tile(trMaxs, (50000, 1))  # Repeating the maximum value along the rows
trMins_rep = np.tile(trMins, (50000, 1))  # Repeating the minimum value along the rows
trainFeatsNorm = np.divide(trainFeats-trMins_rep, trMaxs_rep-trMins_rep)  # Element-wise (x - min)/(max - min)

# Normalizing the test features with the training-set min/max
tsMaxs_rep = np.tile(trMaxs, (10000, 1))  # Repeating the maximum value along the rows
tsMins_rep = np.tile(trMins, (10000, 1))  # Repeating the minimum value along the rows
testFeatsNorm = np.divide(testFeats-tsMins_rep, tsMaxs_rep-tsMins_rep)    # Element-wise (x - min)/(max - min)
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Save Data
with open("TrainFeats.pckl", "wb") as f: pickle.dump(trainFeatsNorm, f) with open("TrainLabel.pckl", "wb") as f: pickle.dump(trainLabel, f) with open("TestFeats.pckl", "wb") as f: pickle.dump(testFeatsNorm, f) with open("TestLabel.pckl", "wb") as f: pickle.dump(testLabel, f) print("files Saved!")
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Classification with Softmax Regression: Data Preparation
##########################
### SETTINGS
##########################

# Device
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hyperparameters
random_seed = 123
learning_rate = 0.01
num_epochs = 100
batch_size = 64

# Architecture
num_features = 9
num_classes = 10

##########################
### CIFAR10 DATASET
##########################

## Converting Numpy array to Torch-Tensor
trainLabels = torch.from_numpy(trainLabel)
trainDataset = TensorDataset(torch.from_numpy(trainFeats), trainLabels)
testLabels = torch.from_numpy(testLabel)
testDataset = TensorDataset(torch.from_numpy(testFeats), testLabels)

## Creating DataLoader
train_loader = DataLoader(trainDataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(testDataset, batch_size=batch_size, shuffle=False)
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Define Model
##########################
### MODEL
##########################

class SoftmaxRegression(torch.nn.Module):

    def __init__(self, num_features, num_classes):
        super(SoftmaxRegression, self).__init__()
        self.linear = torch.nn.Linear(num_features, num_classes)
        # self.linear.weight.detach().zero_()
        # self.linear.bias.detach().zero_()

    def forward(self, x):
        logits = self.linear(x)
        probas = F.softmax(logits, dim=1)
        return logits, probas

model = SoftmaxRegression(num_features=num_features, num_classes=num_classes)
model.to(DEVICE)

##########################
### COST AND OPTIMIZER
##########################

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Define Training Routine
# Manual seed for deterministic data loader
torch.manual_seed(random_seed)

def compute_accuracy(model, data_loader):
    correct_pred, num_examples = 0, 0
    for features, targets in data_loader:
        features = features.float().view(-1, 9).to(DEVICE)
        targets = targets.to(DEVICE)
        logits, probas = model(features)
        _, predicted_labels = torch.max(probas, 1)
        num_examples += targets.size(0)
        correct_pred += (predicted_labels == targets).sum()
    return correct_pred.float() / num_examples * 100

start_time = time.time()
epoch_costs = []
for epoch in range(num_epochs):
    avg_cost = 0.
    for batch_idx, (features, targets) in enumerate(train_loader):
        features = features.float().view(-1, 9).to(DEVICE)
        targets = targets.to(DEVICE)

        ### FORWARD AND BACK PROP
        logits, probas = model(features)

        # note that the PyTorch implementation of
        # CrossEntropyLoss works with logits, not
        # probabilities
        cost = F.cross_entropy(logits, targets)
        optimizer.zero_grad()
        cost.backward()
        avg_cost += cost

        ### UPDATE MODEL PARAMETERS
        optimizer.step()

        ### LOGGING
        if not batch_idx % 50:
            print('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
                  % (epoch+1, num_epochs, batch_idx, len(trainDataset)//batch_size, cost))

    with torch.set_grad_enabled(False):
        avg_cost = avg_cost/len(trainDataset)
        epoch_costs.append(avg_cost)
        print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
              epoch+1, num_epochs, compute_accuracy(model, train_loader)))
        print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Model Performance
%matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.plot(epoch_costs) plt.ylabel('Avg Cross Entropy Loss\n(approximated by averaging over minibatches)') plt.xlabel('Epoch') plt.show() print(f'Train accuracy: {(compute_accuracy(model, train_loader)): .2f}%') print(f'Test accuracy: {(compute_accuracy(model, test_loader)): .2f}%')
Train accuracy: 24.92% Test accuracy: 25.29%
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Comments- This was a demonstration of how we can use manually crafted features in image classification tasks.- The model can be improved in several ways: - Tweaking the parameters used to generate the **_LBP, Co-Occurrence Matrix and Gabor Filter_** features. - Extending the parameters to the Red, Blue and Green channels. - Modifying the learning rate and the number of epochs. - Trying a different algorithm such as a Multi-Layer Perceptron (a sketch follows this section).- The results aren't great but offer a glimpse of manually creating features from images. {{"Maria Petrou, Pedro Garcia Sevilla. _Image Processing: Dealing with Texture_. John Wiley & Sons, Ltd (2006)" | fndetail: 1 }}
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
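One of the improvement directions listed above is replacing the softmax regression with a Multi-Layer Perceptron. Below is a minimal sketch of how that could look with the same 9-dimensional feature vectors and 10 classes; the hidden size and dropout rate are illustrative choices, not values from the original experiment.

```python
import torch
import torch.nn.functional as F

class MLP(torch.nn.Module):
    """Small multi-layer perceptron over the 9 hand-crafted features.
    Hidden size and dropout are illustrative, untuned choices."""
    def __init__(self, num_features=9, num_hidden=64, num_classes=10, p_drop=0.2):
        super().__init__()
        self.fc1 = torch.nn.Linear(num_features, num_hidden)
        self.fc2 = torch.nn.Linear(num_hidden, num_classes)
        self.dropout = torch.nn.Dropout(p_drop)

    def forward(self, x):
        h = F.relu(self.fc1(x))
        h = self.dropout(h)
        logits = self.fc2(h)
        probas = F.softmax(logits, dim=1)  # same (logits, probas) interface as SoftmaxRegression
        return logits, probas

# Drop-in replacement for the model used in the training loop above
model = MLP(num_features=9, num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```

Because it returns the same `(logits, probas)` pair, the training and evaluation code above can be reused without changes.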
TensorFlow 2.0+ Low Level APIs Convert Example This example demonstrates the workflow to build a model using TensorFlow 2.0+ low-level APIs and convert it to Core ML `.mlmodel` format using the `coremltools.converters.tensorflow` converter. For more examples, refer to the `test_tf_2x.py` file. Note: - This notebook was tested with the following dependencies: ```tensorflow==2.0.0 coremltools==3.1```- Models from TensorFlow 2.0+ are supported only for `minimum_ios_deployment_target>=13`. You can also use `tfcoreml.convert()` instead of `coremltools.converters.tensorflow.convert()` to convert your model.
import tensorflow as tf import numpy as np import coremltools print(tf.__version__) print(coremltools.__version__)
WARNING: Logging before flag parsing goes to stderr. W1101 14:02:33.174557 4762860864 __init__.py:74] TensorFlow version 2.0.0 detected. Last version known to be fully compatible is 1.14.0 .
BSD-3-Clause
examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb
DreamChaserMXF/coremltools
Using Low-Level APIs
# construct a toy model with low level APIs root = tf.train.Checkpoint() root.v1 = tf.Variable(3.) root.v2 = tf.Variable(2.) root.f = tf.function(lambda x: root.v1 * root.v2 * x) # save the model saved_model_dir = './tf_model' input_data = tf.constant(1., shape=[1, 1]) to_save = root.f.get_concrete_function(input_data) tf.saved_model.save(root, saved_model_dir, to_save) tf_model = tf.saved_model.load(saved_model_dir) concrete_func = tf_model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY] # convert model into Core ML format model = coremltools.converters.tensorflow.convert( [concrete_func], inputs={'x': (1, 1)}, outputs=['Identity'] ) assert isinstance(model, coremltools.models.MLModel)
0 assert nodes deleted ['Func/StatefulPartitionedCall/input/_2:0', 'StatefulPartitionedCall/mul/ReadVariableOp:0', 'statefulpartitionedcall_args_1:0', 'Func/StatefulPartitionedCall/input/_3:0', 'StatefulPartitionedCall/mul:0', 'StatefulPartitionedCall/ReadVariableOp:0', 'statefulpartitionedcall_args_2:0'] 6 nodes deleted 0 nodes deleted 0 nodes deleted 2 identity nodes deleted 0 disconnected nodes deleted [SSAConverter] Converting function main ... [SSAConverter] [1/3] Converting op type: 'Placeholder', name: 'x', output_shape: (1, 1). [SSAConverter] [2/3] Converting op type: 'Const', name: 'StatefulPartitionedCall/mul'. [SSAConverter] [3/3] Converting op type: 'Mul', name: 'Identity', output_shape: (1, 1).
BSD-3-Clause
examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb
DreamChaserMXF/coremltools
Using Control Flow
# construct a TensorFlow 2.0+ model with tf.function() @tf.function(input_signature=[tf.TensorSpec([], tf.float32)]) def control_flow(x): if x <= 0: return 0. else: return x * 3. to_save = tf.Module() to_save.control_flow = control_flow saved_model_dir = './tf_model' tf.saved_model.save(to_save, saved_model_dir) tf_model = tf.saved_model.load(saved_model_dir) concrete_func = tf_model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY] # convert model into Core ML format model = coremltools.converters.tensorflow.convert( [concrete_func], inputs={'x': (1,)}, outputs=['Identity'] ) assert isinstance(model, coremltools.models.MLModel) # try with some sample inputs inputs = [-3.7, 6.17, 0.0, 1984., -5.] for data in inputs: out1 = to_save.control_flow(data).numpy() out2 = model.predict({'x': np.array([data])})['Identity'] np.testing.assert_array_almost_equal(out1, out2)
_____no_output_____
BSD-3-Clause
examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb
DreamChaserMXF/coremltools
Using `tf.keras` Subclassing APIs
class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.dense1 = tf.keras.layers.Dense(4) self.dense2 = tf.keras.layers.Dense(5) @tf.function def call(self, input_data): return self.dense2(self.dense1(input_data)) keras_model = MyModel() inputs = np.random.rand(4, 4) # subclassed model can only be saved as SavedModel format keras_model._set_inputs(inputs) saved_model_dir = './tf_model_subclassing' keras_model.save(saved_model_dir, save_format='tf') # convert and validate model = coremltools.converters.tensorflow.convert( saved_model_dir, inputs={'input_1': (4, 4)}, outputs=['Identity'] ) assert isinstance(model, coremltools.models.MLModel) # verify the prediction matches keras_prediction = keras_model.predict(inputs) prediction = model.predict({'input_1': inputs})['Identity'] np.testing.assert_array_equal(keras_prediction.shape, prediction.shape) np.testing.assert_almost_equal(keras_prediction.flatten(), prediction.flatten(), decimal=4)
0 assert nodes deleted ['my_model/StatefulPartitionedCall/args_3:0', 'Func/my_model/StatefulPartitionedCall/input/_2:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_11:0', 'my_model/StatefulPartitionedCall/args_4:0', 'Func/my_model/StatefulPartitionedCall/input/_4:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_12:0', 'my_model/StatefulPartitionedCall/args_2:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/MatMul/ReadVariableOp:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/input/_25:0', 'Func/my_model/StatefulPartitionedCall/input/_3:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_13:0', 'Func/my_model/StatefulPartitionedCall/input/_5:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_10:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/input/_24:0', 'my_model/StatefulPartitionedCall/args_1:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/input/_18:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/BiasAdd/ReadVariableOp:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/MatMul/ReadVariableOp:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/BiasAdd/ReadVariableOp:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/input/_19:0'] 16 nodes deleted 0 nodes deleted 0 nodes deleted [Op Fusion] fuse_bias_add() deleted 4 nodes. 2 identity nodes deleted 2 disconnected nodes deleted [SSAConverter] Converting function main ... [SSAConverter] [1/3] Converting op type: 'Placeholder', name: 'input_1', output_shape: (4, 4). [SSAConverter] [2/3] Converting op type: 'MatMul', name: 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/MatMul', output_shape: (4, 4). [SSAConverter] [3/3] Converting op type: 'MatMul', name: 'Identity', output_shape: (4, 5).
BSD-3-Clause
examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb
DreamChaserMXF/coremltools
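Once a conversion succeeds, you will typically want to persist the resulting `MLModel` so it can be bundled with an app. A minimal sketch is shown below; the file name is an arbitrary choice, and `model` and `inputs` refer to the objects created in the cell above.

```python
# Save the converted model to disk (file name is arbitrary)
model.save('./MySubclassedModel.mlmodel')

# Load it back and run a quick sanity-check prediction
loaded_model = coremltools.models.MLModel('./MySubclassedModel.mlmodel')
prediction = loaded_model.predict({'input_1': inputs})['Identity']
print(prediction.shape)
```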
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ Vectors: One Dimensional Lists A vector is a list of numbers. Vectors are very useful to describe the state of a system, as we will see in the main tutorial. A list is a single object in Python. Similarly, a vector is a single mathematical object. The number of elements in a list is its size or length. Similarly, the number of entries in a vector is called the size or dimension of the vector.
# consider the following list with 4 elements L = [1,-2,0,5] print(L)
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
Vectors can be written in horizontal or vertical form. We show this list as a four-dimensional row vector (horizontal) or a column vector (vertical): $$ u = \mypar{1~~-2~~0~~5} ~~~\mbox{ or }~~~ v =\mymatrix{r}{1 \\ -2 \\ 0 \\ 5}, ~~~\mbox{ respectively.}$$ Remark that we do not need any commas in vector notation. Multiplying a vector with a number A vector can be multiplied by a number. The result is also a vector: each entry is multiplied by this number. $$ 3 \cdot v = 3 \cdot \mymatrix{r}{1 \\ -2 \\ 0 \\ 5} = \mymatrix{r}{3 \\ -6 \\ 0 \\ 15} ~~~~~~\mbox{ or }~~~~~~ (-0.6) \cdot v = (-0.6) \cdot \mymatrix{r}{1 \\ -2 \\ 0 \\ 5} = \mymatrix{r}{-0.6 \\ 1.2 \\ 0 \\ -3}.$$ We may think of this as enlarging or shrinking the entries of a vector. We verify our calculations in Python.
# 3 * v v = [1,-2,0,5] print("v is",v) # we use the same list for the result for i in range(len(v)): v[i] = 3 * v[i] print("3v is",v) # -0.6 * v # reinitialize the list v v = [1,-2,0,5] for i in range(len(v)): v[i] = -0.6 * v[i] print("-0.6v is",v)
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
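The loop above modifies the list in place. As a small sketch (not part of the original notebook), the same operation can be packaged in a helper that returns a new list, using a list comprehension:

```python
def scalar_mult(c, v):
    """Return a new list whose entries are the entries of v multiplied by c."""
    return [c * v_i for v_i in v]

v = [1, -2, 0, 5]
print("3v is", scalar_mult(3, v))
print("-0.6v is", scalar_mult(-0.6, v))
```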
Summation of vectors Two vectors (of the same dimension) can be summed up. The sum of two vectors is a vector: the numbers in the same entries are added up. $$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}. ~~~~~~~ \mbox{Then, }~~ u+v = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} + \myrvector{-1\\ -1 \\2 \\ -3 \\ 5} = \myrvector{-3+(-1)\\ -2+(-1) \\0+2 \\ -1+(-3) \\ 4+5} = \myrvector{-4\\ -3 \\2 \\ -4 \\ 9}.$$ We do the same calculations in Python.
u = [-3,-2,0,-1,4] v = [-1,-1,2,-3,5] result=[] for i in range(len(u)): result.append(u[i]+v[i]) print("u+v is",result) # print the result vector similarly to a column vector print() # print an empty line print("the elements of u+v are") for j in range(len(result)): print(result[j])
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
Task 1 Create two 7-dimensional vectors $u$ and $ v $ as two different lists in Python having entries randomly picked between $-10$ and $10$. Print their entries.
from random import randrange # # your solution is here # #r=randrange(-10,11) # randomly pick a number from the list {-10,-9,...,-1,0,1,...,9,10}
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
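A possible sketch for Task 1 (one of many valid answers, since the entries are random):

```python
from random import randrange

# pick 7 random entries between -10 and 10 (inclusive) for each vector
u = [randrange(-10, 11) for _ in range(7)]
v = [randrange(-10, 11) for _ in range(7)]

print("u is", u)
print("v is", v)
```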
click for our solution Task 2 By using the same vectors, find the vector $ (3 u-2 v) $ and print its entries. Here $ 3u $ and $ 2v $ mean that $u$ and $v$ are multiplied by $3$ and $2$, respectively.
# # your solution is here #
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
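A possible sketch for Task 2, reusing the vectors u and v from Task 1:

```python
# entrywise combination 3u - 2v
result = [3 * u[i] - 2 * v[i] for i in range(len(u))]
print("3u-2v is", result)
```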
click for our solution Visualization of vectors We can visualize vectors of dimension at most 3. For simplicity, we give examples of 2-dimensional vectors. Consider the vector $ v = \myvector{1 \\ 2} $. A 2-dimensional vector can be represented on the two-dimensional plane by an arrow starting from the origin $ (0,0) $ to the point $ (1,2) $. We represent the vectors $ 2v = \myvector{2 \\ 4} $ and $ -v = \myvector{-1 \\ -2} $ below. As we can observe, after multiplying by 2, the vector is enlarged, and, after multiplying by $(-1)$, the vector keeps its length but its direction is reversed. The length of a vector The length of a vector is the distance from the point represented by its entries to the origin $(0,0)$. The length of a vector can be calculated by using the Pythagorean theorem. We visualize a vector, its length, and the contributions of each entry to the length. Consider the vector $ u = \myrvector{-3 \\ 4} $. The length of $ u $ is denoted as $ \norm{u} $, and it is calculated as $ \norm{u} =\sqrt{(-3)^2+4^2} = 5 $. Here each entry contributes with its square value. All contributions are summed up, and we obtain the square of the length. This formula generalizes to any dimension. We find the length of the following vector by using Python: $$ v = \myrvector{-1 \\ -3 \\ 5 \\ 3 \\ 1 \\ 2} ~~~~~~~~~~ \mbox{and} ~~~~~~~~~~ \norm{v} = \sqrt{(-1)^2+(-3)^2+5^2+3^2+1^2+2^2} .$$ Remember: There is a short way of writing the power operation in Python. In its generic form: $ a^x $ can be denoted by $ a ** x $ in Python. The square of a number $a$: $ a^2 $ can be denoted by $ a ** 2 $ in Python. The square root of a number $ a $: $ \sqrt{a} = a^{\frac{1}{2}} = a^{0.5} $ can be denoted by $ a ** 0.5 $ in Python.
v = [-1,-3,5,3,1,2] length_square=0 for i in range(len(v)): print(v[i],":square ->",v[i]**2) # print each entry and its square value length_square = length_square + v[i]**2 # sum up the square of each entry length = length_square ** 0.5 # take the square root of the summation of the squares of all entries print("the summation is",length_square) print("then the length is",length) # for square root, we can also use built-in function math.sqrt print() # print an empty line from math import sqrt print("the square root of",length_square,"is",sqrt(length_square))
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
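Since the length computation is reused in the following tasks, it is convenient to wrap it in a small function. This is a sketch, not part of the original notebook:

```python
def length(v):
    """Return the length (Euclidean norm) of a vector given as a list of numbers."""
    return sum(v_i ** 2 for v_i in v) ** 0.5

v = [-1, -3, 5, 3, 1, 2]
print("the length of v is", length(v))  # should print 7.0
```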
Task 3 Let $ u = \myrvector{1 \\ -2 \\ -4 \\ 2} $ be a four dimensional vector.Verify that $ \norm{4 u} = 4 \cdot \norm{u} $ in Python. Remark that $ 4u $ is another vector obtained from $ u $ by multiplying it with 4.
# # your solution is here #
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
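A possible sketch for Task 3, computing both lengths explicitly:

```python
u = [1, -2, -4, 2]
u4 = [4 * u_i for u_i in u]  # the vector 4u

length_u = sum(u_i ** 2 for u_i in u) ** 0.5
length_4u = sum(u_i ** 2 for u_i in u4) ** 0.5

print("4*|u| =", 4 * length_u)
print("|4u|  =", length_4u)  # the two values should match (both are 20.0)
```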
click for our solution Notes: When a vector is multiplied by a number, its length is also multiplied by the same number. But we should be careful with the sign. Consider the vector $ -3 v $. It has the same length as $ 3v $, but its direction is opposite. So, when calculating the length of $ -3 v $, we use the absolute value of the number: $ \norm{-3 v} = |-3| \norm{v} = 3 \norm{v} $. Here $ |-3| $ is the absolute value of $ -3 $. The absolute value of a number is its distance to 0. So, $ |-3| = 3 $. Task 4 Let $ u = \myrvector{1 \\ -2 \\ -4 \\ 2} $ be a four-dimensional vector. Randomly pick a number $r$ from $ \left\{ \dfrac{1}{10}, \dfrac{2}{10}, \cdots, \dfrac{9}{10} \right\} $. Find the vector $(-r)\cdot u$ and then its length.
# # your solution is here #
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
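A possible sketch for Task 4:

```python
from random import randrange

u = [1, -2, -4, 2]
r = randrange(1, 10) / 10      # a random number from {0.1, 0.2, ..., 0.9}
w = [-r * u_i for u_i in u]    # the vector (-r)u

length_w = sum(w_i ** 2 for w_i in w) ** 0.5
print("r =", r)
print("(-r)u is", w)
print("its length is", length_w)  # equals r times the length of u, i.e. 5r
```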
External data> Helper functions used to download and extract common time series datasets.
#export from tsai.imports import * from tsai.utils import * from tsai.data.validation import * #export from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df from sktime.utils.validation.panel import check_X from sktime.utils.data_io import TsFileParseException #export from fastai.data.external import * from tqdm import tqdm import zipfile import tempfile try: from urllib import urlretrieve except ImportError: from urllib.request import urlretrieve import shutil from numpy import distutils import distutils #export def decompress_from_url(url, target_dir=None, verbose=False): # Download try: pv("downloading data...", verbose) fname = os.path.basename(url) tmpdir = tempfile.mkdtemp() tmpfile = os.path.join(tmpdir, fname) urlretrieve(url, tmpfile) pv("...data downloaded", verbose) # Decompress try: pv("decompressing data...", verbose) if not os.path.exists(target_dir): os.makedirs(target_dir) shutil.unpack_archive(tmpfile, target_dir) shutil.rmtree(tmpdir) pv("...data decompressed", verbose) return target_dir except: shutil.rmtree(tmpdir) if verbose: sys.stderr.write("Could not decompress file, aborting.\n") except: shutil.rmtree(tmpdir) if verbose: sys.stderr.write("Could not download url. Please, check url.\n") #export from fastdownload import download_url def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False): "Download `url` to `fname`." fname = Path(fname or URLs.path(url, c_key=c_key)) fname.parent.mkdir(parents=True, exist_ok=True) if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose) return fname # export def get_UCR_univariate_list(): return [ 'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken', 'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays', 'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham', 'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect', 'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain', 'NonInvasiveFetalECGThorax1', 'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf', 'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane', 'PowerCons', 'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW', 'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ', 'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1', 
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl', 'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga' ] test_eq(len(get_UCR_univariate_list()), 128) UTSC_datasets = get_UCR_univariate_list() UCR_univariate_list = get_UCR_univariate_list() #export def get_UCR_multivariate_list(): return [ 'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery', 'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary' ] test_eq(len(get_UCR_multivariate_list()), 30) MTSC_datasets = get_UCR_multivariate_list() UCR_multivariate_list = get_UCR_multivariate_list() UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list) classification_list = UCR_list TSC_datasets = classification_datasets = UCR_list len(UCR_list) #export def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True, force_download=False, verbose=False): dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()] assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset' dsid = dsid_list[0] return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data if dsid in ['InsectWingbeat']: warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!') pv(f'Dataset: {dsid}', verbose) full_parent_dir = Path(path)/parent_dir full_tgt_dir = full_parent_dir/dsid # if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir) full_tgt_dir.parent.mkdir(parents=True, exist_ok=True) if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]): # Option A src_website = 'http://www.timeseriesclassification.com/Downloads' decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose) if dsid == 'DuckDuckGeese': with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref: zip_ref.extractall(Path(parent_dir)) if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or \ Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0: print('It has not been possible to download the required files') if return_split: return None, None, None, None else: return None, None, None pv('loading ts files to dataframe...', verbose) X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts') X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts') pv('...ts files loaded', verbose) pv('preparing numpy arrays...', verbose) X_train_ = [] X_valid_ = [] for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False): X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths 
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1)) X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1)) X_train, X_valid = match_seq_len(X_train, X_valid) np.save(f'{full_tgt_dir}/X_train.npy', X_train) np.save(f'{full_tgt_dir}/y_train.npy', y_train) np.save(f'{full_tgt_dir}/X_valid.npy', X_valid) np.save(f'{full_tgt_dir}/y_valid.npy', y_valid) np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid)) np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid)) del X_train, X_valid, y_train, y_valid delete_all_in_dir(full_tgt_dir, exception='.npy') pv('...numpy arrays correctly saved', verbose) mmap_mode = mode if on_disk else None X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode) y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode) X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode) y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode) if return_split: if Xdtype is not None: X_train = X_train.astype(Xdtype) X_valid = X_valid.astype(Xdtype) if ydtype is not None: y_train = y_train.astype(ydtype) y_valid = y_valid.astype(ydtype) if verbose: print('X_train:', X_train.shape) print('y_train:', y_train.shape) print('X_valid:', X_valid.shape) print('y_valid:', y_valid.shape, '\n') return X_train, y_train, X_valid, y_valid else: X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode) y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode) splits = get_predefined_splits(X_train, X_valid) if Xdtype is not None: X = X.astype(Xdtype) if verbose: print('X :', X .shape) print('y :', y .shape) print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n') return X, y, splits get_classification_data = get_UCR_data #hide PATH = Path('.') dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate for dsid in dsids: print(dsid) tgt_dir = PATH/f'data/UCR/{dsid}' if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir) test_eq(len(get_files(tgt_dir)), 0) # no file left X_train, y_train, X_valid, y_valid = get_UCR_data(dsid) test_eq(len(get_files(tgt_dir, '.npy')), 6) test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir del X_train, y_train, X_valid, y_valid start = time.time() X_train, y_train, X_valid, y_valid = get_UCR_data(dsid) elapsed = time.time() - start test_eq(elapsed < 1, True) test_eq(X_train.ndim, 3) test_eq(y_train.ndim, 1) test_eq(X_valid.ndim, 3) test_eq(y_valid.ndim, 1) test_eq(len(get_files(tgt_dir, '.npy')), 6) test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir test_eq(X_train.ndim, 3) test_eq(y_train.ndim, 1) test_eq(X_valid.ndim, 3) test_eq(y_valid.ndim, 1) test_eq(X_train.dtype, np.float32) test_eq(X_train.__class__.__name__, 'memmap') del X_train, y_train, X_valid, y_valid X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False) test_eq(X_train.__class__.__name__, 'ndarray') del X_train, y_train, X_valid, y_valid X_train, y_train, X_valid, y_valid = get_UCR_data('natops') dsid = 'natops' X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True) X, y, splits = get_UCR_data(dsid, split_data=False) test_eq(X[splits[0]], X_train) test_eq(y[splits[1]], y_valid) test_eq(X[splits[0]], X_train) test_eq(y[splits[1]], y_valid) test_type(X, X_train) test_type(y, y_train) #export def check_data(X, y=None, splits=None, show_plot=True): try: X_is_nan = np.isnan(X).sum() except: X_is_nan = 'couldn not be checked' if X.ndim == 3: shape = f'[{X.shape[0]} samples x {X.shape[1]} features x 
{X.shape[-1]} timesteps]' print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}') else: print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}') if not isinstance(X, np.ndarray): warnings.warn('X must be a np.ndarray') if X_is_nan: warnings.warn('X must not contain nan values') if y is not None: y_shape = y.shape y = y.ravel() if isinstance(y[0], str): n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}' y_is_nan = 'nan' in [c.lower() for c in np.unique(y)] print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}') else: y_is_nan = np.isnan(y).sum() print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}') if not isinstance(y, np.ndarray): warnings.warn('y must be a np.ndarray') if y_is_nan: warnings.warn('y must not contain nan values') if splits is not None: _splits = get_splits_len(splits) overlap = check_splits_overlap(splits) print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}') if show_plot: plot_splits(splits) dsid = 'ECGFiveDays' X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True) check_data(X, y, splits) check_data(X[:, 0], y, splits) y = y.astype(np.float32) check_data(X, y, splits) y[:10] = np.nan check_data(X[:, 0], y, splits) X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True) splits = get_splits(y, 3) check_data(X, y, splits) check_data(X[:, 0], y, splits) y[:5]= np.nan check_data(X[:, 0], y, splits) X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True) #export # This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available. # The following code is adapted from the python package sktime to read .ts file. class _TsFileParseException(Exception): """ Should be raised when parsing a .ts file and the format is incorrect. """ pass def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'): """Loads data from a .ts file into a Pandas DataFrame. Parameters ---------- full_file_path_and_name: str The full pathname of the .ts file to read. return_separate_X_and_y: bool true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise. This is only relevant for data that replace_missing_vals_with: str The value that missing values in the text file should be replaced with prior to parsing. Returns ------- DataFrame, ndarray If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values. DataFrame If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values. 
""" # Initialize flags and variables used when parsing the file metadata_started = False data_started = False has_problem_name_tag = False has_timestamps_tag = False has_univariate_tag = False has_class_labels_tag = False has_target_labels_tag = False has_data_tag = False previous_timestamp_was_float = None previous_timestamp_was_int = None previous_timestamp_was_timestamp = None num_dimensions = None is_first_case = True instance_list = [] class_val_list = [] line_num = 0 # Parse the file # print(full_file_path_and_name) with open(full_file_path_and_name, 'r', encoding='utf-8') as file: for line in tqdm(file): # print(".", end='') # Strip white space from start/end of line and change to lowercase for use below line = line.strip().lower() # Empty lines are valid at any point in a file if line: # Check if this line contains metadata # Please note that even though metadata is stored in this function it is not currently published externally if line.startswith("@problemname"): # Check that the data has not started if data_started: raise _TsFileParseException("metadata must come before data") # Check that the associated value is valid tokens = line.split(' ') token_len = len(tokens) if token_len == 1: raise _TsFileParseException("problemname tag requires an associated value") problem_name = line[len("@problemname") + 1:] has_problem_name_tag = True metadata_started = True elif line.startswith("@timestamps"): # Check that the data has not started if data_started: raise _TsFileParseException("metadata must come before data") # Check that the associated value is valid tokens = line.split(' ') token_len = len(tokens) if token_len != 2: raise _TsFileParseException("timestamps tag requires an associated Boolean value") elif tokens[1] == "true": timestamps = True elif tokens[1] == "false": timestamps = False else: raise _TsFileParseException("invalid timestamps value") has_timestamps_tag = True metadata_started = True elif line.startswith("@univariate"): # Check that the data has not started if data_started: raise _TsFileParseException("metadata must come before data") # Check that the associated value is valid tokens = line.split(' ') token_len = len(tokens) if token_len != 2: raise _TsFileParseException("univariate tag requires an associated Boolean value") elif tokens[1] == "true": univariate = True elif tokens[1] == "false": univariate = False else: raise _TsFileParseException("invalid univariate value") has_univariate_tag = True metadata_started = True elif line.startswith("@classlabel"): # Check that the data has not started if data_started: raise _TsFileParseException("metadata must come before data") # Check that the associated value is valid tokens = line.split(' ') token_len = len(tokens) if token_len == 1: raise _TsFileParseException("classlabel tag requires an associated Boolean value") if tokens[1] == "true": class_labels = True elif tokens[1] == "false": class_labels = False else: raise _TsFileParseException("invalid classLabel value") # Check if we have any associated class values if token_len == 2 and class_labels: raise _TsFileParseException("if the classlabel tag is true then class values must be supplied") has_class_labels_tag = True class_label_list = [token.strip() for token in tokens[2:]] metadata_started = True elif line.startswith("@targetlabel"): # Check that the data has not started if data_started: raise _TsFileParseException("metadata must come before data") # Check that the associated value is valid tokens = line.split(' ') token_len = len(tokens) if token_len == 1: raise 
_TsFileParseException("targetlabel tag requires an associated Boolean value") if tokens[1] == "true": target_labels = True elif tokens[1] == "false": target_labels = False else: raise _TsFileParseException("invalid targetLabel value") has_target_labels_tag = True class_val_list = [] metadata_started = True # Check if this line contains the start of data elif line.startswith("@data"): if line != "@data": raise _TsFileParseException("data tag should not have an associated value") if data_started and not metadata_started: raise _TsFileParseException("metadata must come before data") else: has_data_tag = True data_started = True # If the 'data tag has been found then metadata has been parsed and data can be loaded elif data_started: # Check that a full set of metadata has been provided incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag if incomplete_regression_meta_data and incomplete_classification_meta_data: raise _TsFileParseException("a full set of metadata has not been provided before the data") # Replace any missing values with the value specified line = line.replace("?", replace_missing_vals_with) # Check if we dealing with data that has timestamps if timestamps: # We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one has_another_value = False has_another_dimension = False timestamps_for_dimension = [] values_for_dimension = [] this_line_num_dimensions = 0 line_len = len(line) char_num = 0 while char_num < line_len: # Move through any spaces while char_num < line_len and str.isspace(line[char_num]): char_num += 1 # See if there is any more data to read in or if we should validate that read thus far if char_num < line_len: # See if we have an empty dimension (i.e. 
no values) if line[char_num] == ":": if len(instance_list) < (this_line_num_dimensions + 1): instance_list.append([]) instance_list[this_line_num_dimensions].append(pd.Series()) this_line_num_dimensions += 1 has_another_value = False has_another_dimension = True timestamps_for_dimension = [] values_for_dimension = [] char_num += 1 else: # Check if we have reached a class label if line[char_num] != "(" and target_labels: class_val = line[char_num:].strip() # if class_val not in class_val_list: # raise _TsFileParseException( # "the class value '" + class_val + "' on line " + str( # line_num + 1) + " is not valid") class_val_list.append(float(class_val)) char_num = line_len has_another_value = False has_another_dimension = False timestamps_for_dimension = [] values_for_dimension = [] else: # Read in the data contained within the next tuple if line[char_num] != "(" and not target_labels: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " does not start with a '('") char_num += 1 tuple_data = "" while char_num < line_len and line[char_num] != ")": tuple_data += line[char_num] char_num += 1 if char_num >= line_len or line[char_num] != ")": raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " does not end with a ')'") # Read in any spaces immediately after the current tuple char_num += 1 while char_num < line_len and str.isspace(line[char_num]): char_num += 1 # Check if there is another value or dimension to process after this tuple if char_num >= line_len: has_another_value = False has_another_dimension = False elif line[char_num] == ",": has_another_value = True has_another_dimension = False elif line[char_num] == ":": has_another_value = False has_another_dimension = True char_num += 1 # Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma last_comma_index = tuple_data.rfind(',') if last_comma_index == -1: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " contains a tuple that has no comma inside of it") try: value = tuple_data[last_comma_index + 1:] value = float(value) except ValueError: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " contains a tuple that does not have a valid numeric value") # Check the type of timestamp that we have timestamp = tuple_data[0: last_comma_index] try: timestamp = int(timestamp) timestamp_is_int = True timestamp_is_timestamp = False except ValueError: timestamp_is_int = False if not timestamp_is_int: try: timestamp = float(timestamp) timestamp_is_float = True timestamp_is_timestamp = False except ValueError: timestamp_is_float = False if not timestamp_is_int and not timestamp_is_float: try: timestamp = timestamp.strip() timestamp_is_timestamp = True except ValueError: timestamp_is_timestamp = False # Make sure that the timestamps in the file (not just this dimension or case) are consistent if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'") if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " 
+ str( line_num + 1) + " contains tuples where the timestamp format is inconsistent") if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " contains tuples where the timestamp format is inconsistent") if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " contains tuples where the timestamp format is inconsistent") # Store the values timestamps_for_dimension += [timestamp] values_for_dimension += [value] # If this was our first tuple then we store the type of timestamp we had if previous_timestamp_was_timestamp is None and timestamp_is_timestamp: previous_timestamp_was_timestamp = True previous_timestamp_was_int = False previous_timestamp_was_float = False if previous_timestamp_was_int is None and timestamp_is_int: previous_timestamp_was_timestamp = False previous_timestamp_was_int = True previous_timestamp_was_float = False if previous_timestamp_was_float is None and timestamp_is_float: previous_timestamp_was_timestamp = False previous_timestamp_was_int = False previous_timestamp_was_float = True # See if we should add the data for this dimension if not has_another_value: if len(instance_list) < (this_line_num_dimensions + 1): instance_list.append([]) if timestamp_is_timestamp: timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension) instance_list[this_line_num_dimensions].append( pd.Series(index=timestamps_for_dimension, data=values_for_dimension)) this_line_num_dimensions += 1 timestamps_for_dimension = [] values_for_dimension = [] elif has_another_value: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " ends with a ',' that is not followed by another tuple") elif has_another_dimension and target_labels: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " ends with a ':' while it should list a class value") elif has_another_dimension and not target_labels: if len(instance_list) < (this_line_num_dimensions + 1): instance_list.append([]) instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32)) this_line_num_dimensions += 1 num_dimensions = this_line_num_dimensions # If this is the 1st line of data we have seen then note the dimensions if not has_another_value and not has_another_dimension: if num_dimensions is None: num_dimensions = this_line_num_dimensions if num_dimensions != this_line_num_dimensions: raise _TsFileParseException("line " + str( line_num + 1) + " does not have the same number of dimensions as the previous line of data") # Check that we are not expecting some more data, and if not, store that processed above if has_another_value: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " ends with a ',' that is not followed by another tuple") elif has_another_dimension and target_labels: raise _TsFileParseException( "dimension " + str(this_line_num_dimensions + 1) + " on line " + str( line_num + 1) + " ends with a ':' while it should list a class value") elif has_another_dimension and not target_labels: if len(instance_list) < (this_line_num_dimensions + 1): instance_list.append([]) 
instance_list[this_line_num_dimensions].append(pd.Series()) this_line_num_dimensions += 1 num_dimensions = this_line_num_dimensions # If this is the 1st line of data we have seen then note the dimensions if not has_another_value and num_dimensions != this_line_num_dimensions: raise _TsFileParseException("line " + str( line_num + 1) + " does not have the same number of dimensions as the previous line of data") # Check if we should have class values, and if so that they are contained in those listed in the metadata if target_labels and len(class_val_list) == 0: raise _TsFileParseException("the cases have no associated class values") else: dimensions = line.split(":") # If first row then note the number of dimensions (that must be the same for all cases) if is_first_case: num_dimensions = len(dimensions) if target_labels: num_dimensions -= 1 for dim in range(0, num_dimensions): instance_list.append([]) is_first_case = False # See how many dimensions that the case whose data in represented in this line has this_line_num_dimensions = len(dimensions) if target_labels: this_line_num_dimensions -= 1 # All dimensions should be included for all series, even if they are empty if this_line_num_dimensions != num_dimensions: raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str( num_dimensions) + " but have read " + str(this_line_num_dimensions)) # Process the data for each dimension for dim in range(0, num_dimensions): dimension = dimensions[dim].strip() if dimension: data_series = dimension.split(",") data_series = [float(i) for i in data_series] instance_list[dim].append(pd.Series(data_series)) else: instance_list[dim].append(pd.Series()) if target_labels: class_val_list.append(float(dimensions[num_dimensions].strip())) line_num += 1 # Check that the file was not empty if line_num: # Check that the file contained both metadata and data complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data: raise _TsFileParseException("metadata incomplete") elif metadata_started and not data_started: raise _TsFileParseException("file contained metadata but no data") elif metadata_started and data_started and len(instance_list) == 0: raise _TsFileParseException("file contained metadata but no data") # Create a DataFrame from the data parsed above data = pd.DataFrame(dtype=np.float32) for dim in range(0, num_dimensions): data['dim_' + str(dim)] = instance_list[dim] # Check if we should return any associated class labels separately if target_labels: if return_separate_X_and_y: return data, np.asarray(class_val_list) else: data['class_vals'] = pd.Series(class_val_list) return data else: return data else: raise _TsFileParseException("empty file") #export def get_Monash_regression_list(): return sorted([ "AustraliaRainfall", "HouseholdPowerConsumption1", "HouseholdPowerConsumption2", "BeijingPM25Quality", "BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent", "FloodModeling1", "FloodModeling2", "FloodModeling3", "AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment", "NewsTitleSentiment", "IEEEPPG", #"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded ]) Monash_regression_list = get_Monash_regression_list() regression_list = 
Monash_regression_list TSR_datasets = regression_datasets = regression_list len(Monash_regression_list) #export def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False, verbose=False): dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()] assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset' dsid = dsid_list[0] full_tgt_dir = Path(path)/dsid pv(f'Dataset: {dsid}', verbose) if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]): if dsid == 'AppliancesEnergy': id = 3902637 elif dsid == 'HouseholdPowerConsumption1': id = 3902704 elif dsid == 'HouseholdPowerConsumption2': id = 3902706 elif dsid == 'BenzeneConcentration': id = 3902673 elif dsid == 'BeijingPM25Quality': id = 3902671 elif dsid == 'BeijingPM10Quality': id = 3902667 elif dsid == 'LiveFuelMoistureContent': id = 3902716 elif dsid == 'FloodModeling1': id = 3902694 elif dsid == 'FloodModeling2': id = 3902696 elif dsid == 'FloodModeling3': id = 3902698 elif dsid == 'AustraliaRainfall': id = 3902654 elif dsid == 'PPGDalia': id = 3902728 elif dsid == 'IEEEPPG': id = 3902710 elif dsid == 'BIDMCRR' or dsid == 'BIDM32CRR': id = 3902685 elif dsid == 'BIDMCHR' or dsid == 'BIDM32CHR': id = 3902676 elif dsid == 'BIDMCSpO2' or dsid == 'BIDM32CSpO2': id = 3902688 elif dsid == 'NewsHeadlineSentiment': id = 3902718 elif dsid == 'NewsTitleSentiment': id = 3902726 elif dsid == 'Covid3Month': id = 3902690 for split in ['TRAIN', 'TEST']: url = f"https://zenodo.org/record/{id}/files/{dsid}_{split}.ts" fname = Path(path)/f'{dsid}/{dsid}_{split}.ts' pv('downloading data...', verbose) try: download_data(url, fname, c_key='archive', force_download=force_download, timeout=4) except: warnings.warn(f'Cannot download {dsid} dataset') if split_data: return None, None, None, None else: return None, None, None pv('...download complete', verbose) if split == 'TRAIN': X_train, y_train = _load_from_tsfile_to_dataframe2(fname) X_train = check_X(X_train, coerce_to_numpy=True) else: X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname) X_valid = check_X(X_valid, coerce_to_numpy=True) np.save(f'{full_tgt_dir}/X_train.npy', X_train) np.save(f'{full_tgt_dir}/y_train.npy', y_train) np.save(f'{full_tgt_dir}/X_valid.npy', X_valid) np.save(f'{full_tgt_dir}/y_valid.npy', y_valid) np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid)) np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid)) del X_train, X_valid, y_train, y_valid delete_all_in_dir(full_tgt_dir, exception='.npy') pv('...numpy arrays correctly saved', verbose) mmap_mode = mode if on_disk else None X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode) y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode) X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode) y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode) if Xdtype is not None: X_train = X_train.astype(Xdtype) X_valid = X_valid.astype(Xdtype) if ydtype is not None: y_train = y_train.astype(ydtype) y_valid = y_valid.astype(ydtype) if split_data: if verbose: print('X_train:', X_train.shape) print('y_train:', y_train.shape) print('X_valid:', X_valid.shape) print('y_valid:', y_valid.shape, '\n') return X_train, y_train, X_valid, y_valid else: X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode) y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode) splits = 
get_predefined_splits(X_train, X_valid) if verbose: print('X :', X .shape) print('y :', y .shape) print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n') return X, y, splits get_regression_data = get_Monash_regression_data dsid = "Covid3Month" X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=True) X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=True, verbose=True) if X_train is not None: test_eq(X_train.shape, (140, 1, 84)) if X is not None: test_eq(X.shape, (201, 1, 84)) #export def get_forecasting_list(): return sorted([ "Sunspots", "Weather" ]) forecasting_time_series = get_forecasting_list() #export def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs): dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()] assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset' dsid = dsid_list[0] if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip' else: full_tgt_dir = Path(path)/f'{dsid}.csv' pv(f'Dataset: {dsid}', verbose) if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv" elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip' try: pv("downloading data...", verbose) if force_download: try: os.remove(full_tgt_dir) except OSError: pass download_data(url, full_tgt_dir, force_download=force_download, **kwargs) pv(f"...data downloaded. Path = {full_tgt_dir}", verbose) if dsid == 'Sunspots': df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date']) return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame() elif dsid == 'Weather': # This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series) df = pd.read_csv(full_tgt_dir) df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record. date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S') # remove error (negative wind) wv = df['wv (m/s)'] bad_wv = wv == -9999.0 wv[bad_wv] = 0.0 max_wv = df['max. wv (m/s)'] bad_max_wv = max_wv == -9999.0 max_wv[bad_max_wv] = 0.0 wv = df.pop('wv (m/s)') max_wv = df.pop('max. wv (m/s)') # Convert to radians. wd_rad = df.pop('wd (deg)')*np.pi / 180 # Calculate the wind x and y components. df['Wx'] = wv*np.cos(wd_rad) df['Wy'] = wv*np.sin(wd_rad) # Calculate the max wind x and y components. 
df['max Wx'] = max_wv*np.cos(wd_rad) df['max Wy'] = max_wv*np.sin(wd_rad) timestamp_s = date_time.map(datetime.timestamp) day = 24*60*60 year = (365.2425)*day df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day)) df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day)) df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year)) df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year)) df.reset_index(drop=True, inplace=True) return df else: return full_tgt_dir except: warnings.warn(f"Cannot download {dsid} dataset") return ts = get_forecasting_time_series("sunspots", force_download=True) test_eq(len(ts), 3235) ts ts = get_forecasting_time_series("weather", force_download=True) test_eq(len(ts), 70091) ts # export Monash_forecasting_list = ['m1_yearly_dataset', 'm1_quarterly_dataset', 'm1_monthly_dataset', 'm3_yearly_dataset', 'm3_quarterly_dataset', 'm3_monthly_dataset', 'm3_other_dataset', 'm4_yearly_dataset', 'm4_quarterly_dataset', 'm4_monthly_dataset', 'm4_weekly_dataset', 'm4_daily_dataset', 'm4_hourly_dataset', 'tourism_yearly_dataset', 'tourism_quarterly_dataset', 'tourism_monthly_dataset', 'nn5_daily_dataset_with_missing_values', 'nn5_daily_dataset_without_missing_values', 'nn5_weekly_dataset', 'cif_2016_dataset', 'kaggle_web_traffic_dataset_with_missing_values', 'kaggle_web_traffic_dataset_without_missing_values', 'kaggle_web_traffic_weekly_dataset', 'solar_10_minutes_dataset', 'solar_weekly_dataset', 'electricity_hourly_dataset', 'electricity_weekly_dataset', 'london_smart_meters_dataset_with_missing_values', 'london_smart_meters_dataset_without_missing_values', 'wind_farms_minutely_dataset_with_missing_values', 'wind_farms_minutely_dataset_without_missing_values', 'car_parts_dataset_with_missing_values', 'car_parts_dataset_without_missing_values', 'dominick_dataset', 'fred_md_dataset', 'traffic_hourly_dataset', 'traffic_weekly_dataset', 'pedestrian_counts_dataset', 'hospital_dataset', 'covid_deaths_dataset', 'kdd_cup_2018_dataset_with_missing_values', 'kdd_cup_2018_dataset_without_missing_values', 'weather_dataset', 'sunspot_dataset_with_missing_values', 'sunspot_dataset_without_missing_values', 'saugeenday_dataset', 'us_births_dataset', 'elecdemand_dataset', 'solar_4_seconds_dataset', 'wind_4_seconds_dataset', 'Sunspots', 'Weather'] forecasting_list = Monash_forecasting_list # export ## Original code available at: https://github.com/rakshitha123/TSForecasting # This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in # the time series forecasting research space. # The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website: # https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643. # Citation: # @misc{godahewa2021monash, # author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. 
and Montero-Manso, Pablo", # title="Monash Time Series Forecasting Archive", # howpublished ="\url{https://arxiv.org/abs/2105.06643}", # year="2021" # } # Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths # # Parameters # full_file_path_and_name - complete .tsf file path # replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe # value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"): col_names = [] col_types = [] all_data = {} line_count = 0 frequency = None forecast_horizon = None contain_missing_values = None contain_equal_length = None found_data_tag = False found_data_section = False started_reading_data_section = False with open(full_file_path_and_name, 'r', encoding='cp1252') as file: for line in file: # Strip white space from start/end of line line = line.strip() if line: if line.startswith("@"): # Read meta-data if not line.startswith("@data"): line_content = line.split(" ") if line.startswith("@attribute"): if (len(line_content) != 3): # Attributes have both name and type raise TsFileParseException("Invalid meta-data specification.") col_names.append(line_content[1]) col_types.append(line_content[2]) else: if len(line_content) != 2: # Other meta-data have only values raise TsFileParseException("Invalid meta-data specification.") if line.startswith("@frequency"): frequency = line_content[1] elif line.startswith("@horizon"): forecast_horizon = int(line_content[1]) elif line.startswith("@missing"): contain_missing_values = bool(distutils.util.strtobool(line_content[1])) elif line.startswith("@equallength"): contain_equal_length = bool(distutils.util.strtobool(line_content[1])) else: if len(col_names) == 0: raise TsFileParseException("Missing attribute section. Attribute section must come before data.") found_data_tag = True elif not line.startswith("#"): if len(col_names) == 0: raise TsFileParseException("Missing attribute section. Attribute section must come before data.") elif not found_data_tag: raise TsFileParseException("Missing @data tag.") else: if not started_reading_data_section: started_reading_data_section = True found_data_section = True all_series = [] for col in col_names: all_data[col] = [] full_info = line.split(":") if len(full_info) != (len(col_names) + 1): raise TsFileParseException("Missing attributes/values in series.") series = full_info[len(full_info) - 1] series = series.split(",") if(len(series) == 0): raise TsFileParseException("A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol") numeric_series = [] for val in series: if val == "?": numeric_series.append(replace_missing_vals_with) else: numeric_series.append(float(val)) if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)): raise TsFileParseException("All series values are missing. A given series should contains a set of comma separated numeric values. 
At least one numeric value should be there in a series.") all_series.append(pd.Series(numeric_series).array) for i in range(len(col_names)): att_val = None if col_types[i] == "numeric": att_val = int(full_info[i]) elif col_types[i] == "string": att_val = str(full_info[i]) elif col_types[i] == "date": att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S') else: raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required. if(att_val == None): raise TsFileParseException("Invalid attribute value.") else: all_data[col_names[i]].append(att_val) line_count = line_count + 1 if line_count == 0: raise TsFileParseException("Empty file.") if len(col_names) == 0: raise TsFileParseException("Missing attribute section.") if not found_data_section: raise TsFileParseException("Missing series information under data section.") all_data[value_column_name] = all_series loaded_data = pd.DataFrame(all_data) return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length # export def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True): pv(f'Dataset: {dsid}', verbose) dsid = dsid.lower() assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list' if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip' elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip' elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip' elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip' elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip' elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip' elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip' elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip' elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip' elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip' elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip' elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip' elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip' elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip' elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip' elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip' elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip' elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip' elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip' elif dsid == 'cif_2016_dataset': url = 
'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip' elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip' elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip' elif dsid == 'kaggle_web_traffic_weekly': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip' elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip' elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip' elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip' elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip' elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip' elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip' elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip' elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip' elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip' elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip' elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip' elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip' elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip' elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip' elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip' elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip' elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip' elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip' elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip' elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip' elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip' elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip' elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip' elif dsid == 'us_births_dataset': url = 
'https://zenodo.org/record/4656049/files/us_births_dataset.zip' elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip' elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip' elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip' path = Path(path) full_path = path/f'{dsid}.tsf' if not full_path.exists() or force_download: decompress_from_url(url, target_dir=path, verbose=verbose) pv("converting dataframe to numpy array...", verbose) data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path) X = to3d(stack_pad(data['series_value'])) pv("...dataframe converted to numpy array", verbose) pv(f'\nX.shape: {X.shape}', verbose) pv(f'freq: {frequency}', verbose) pv(f'forecast_horizon: {forecast_horizon}', verbose) pv(f'contain_missing_values: {contain_missing_values}', verbose) pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose) if remove_from_disk: os.remove(full_path) return X get_forecasting_data = get_Monash_forecasting_data dsid = 'm1_yearly_dataset' X = get_Monash_forecasting_data(dsid, force_download=True, remove_from_disk=True) test_eq(X.shape, (181, 1, 58)) #hide from tsai.imports import create_scripts from tsai.export import get_nb_name nb_name = get_nb_name() create_scripts(nb_name);
_____no_output_____
Apache-2.0
nbs/012_data.external.ipynb
clancy0614/tsai
Models using 3D convolutions> This module defines 3D-convolutional ResNet models for the UCF101 dataset, to be used with the core functions. Refs: [understanding-1d-and-3d-convolution](https://towardsdatascience.com/understanding-1d-and-3d-convolution-neural-network-keras-9d8f76e29610)
#export import torch import torch.nn as nn import torchvision # used to download the model import torch.nn.functional as F from torch.autograd import Variable import math #export def conv3x3x3(in_channels, out_channels, stride=1): # 3x3x3 convolution with padding return nn.Conv3d( in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False) def downsample_basic_block(x, planes, stride): out = F.avg_pool3d(x, kernel_size=1, stride=stride) zero_pads = torch.Tensor( out.size(0), planes - out.size(1), out.size(2), out.size(3), out.size(4)).zero_() if isinstance(out.data, torch.cuda.FloatTensor): zero_pads = zero_pads.cuda() out = Variable(torch.cat([out.data, zero_pads], dim=1)) return out #export class BasicBlock(nn.Module): expansion = 1 def __init__(self, in_channels, channels, stride=1, downsample=None): super(BasicBlock, self).__init__() self.conv1 = conv3x3x3(in_channels, channels, stride) self.bn1 = nn.BatchNorm3d(channels) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3x3(channels, channels) self.bn2 = nn.BatchNorm3d(channels) self.downsample = downsample self.stride = stride def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) if self.downsample is not None: residual = self.downsample(x) out += residual out = self.relu(out) return out #export class Bottleneck(nn.Module): expansion = 4 def __init__(self, inplanes, planes, stride=1, downsample=None): super(Bottleneck, self).__init__() self.conv1 = nn.Conv3d(inplanes, planes, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm3d(planes) self.conv2 = nn.Conv3d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) self.bn2 = nn.BatchNorm3d(planes) self.conv3 = nn.Conv3d(planes, planes * 4, kernel_size=1, bias=False) self.bn3 = nn.BatchNorm3d(planes * 4) self.relu = nn.ReLU(inplace=True) self.downsample = downsample self.stride = stride def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out = self.relu(out) out = self.conv3(out) out = self.bn3(out) if self.downsample is not None: residual = self.downsample(x) out += residual out = self.relu(out) return out #export class ResNet(nn.Module): def __init__(self, block, layers, sample_size, sample_duration, shortcut_type='B', num_classes=400): self.inplanes = 64 super(ResNet, self).__init__() self.conv1 = nn.Conv3d(3, 64, kernel_size=7, stride=(1, 2, 2), padding=(3, 3, 3), bias=False) self.bn1 = nn.BatchNorm3d(64) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool3d(kernel_size=(3, 3, 3), stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0], shortcut_type) self.layer2 = self._make_layer(block, 128, layers[1], shortcut_type, stride=2) self.layer3 = self._make_layer(block, 256, layers[2], shortcut_type, stride=2) self.layer4 = self._make_layer(block, 512, layers[3], shortcut_type, stride=2) last_duration = int(math.ceil(sample_duration / 16)) last_size = int(math.ceil(sample_size / 32)) self.avgpool = nn.AvgPool3d((last_duration, last_size, last_size), stride=1) self.fc = nn.Linear(512 * block.expansion, num_classes) for m in self.modules(): if isinstance(m, nn.Conv3d): m.weight = nn.init.kaiming_normal_(m.weight, mode='fan_out') elif isinstance(m, nn.BatchNorm3d): m.weight.data.fill_(1) m.bias.data.zero_() def _make_layer(self, block, planes, blocks, shortcut_type, stride=1): downsample = None if stride != 1 or self.inplanes != planes * block.expansion: if 
shortcut_type == 'A': downsample = partial( downsample_basic_block, planes=planes * block.expansion, stride=stride) else: downsample = nn.Sequential( nn.Conv3d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False), nn.BatchNorm3d(planes * block.expansion)) layers = [] layers.append(block(self.inplanes, planes, stride, downsample)) self.inplanes = planes * block.expansion for i in range(1, blocks): layers.append(block(self.inplanes, planes)) return nn.Sequential(*layers) def forward(self, x): # only when using fastai x = x.permute(0,2,1,3,4) with torch.no_grad(): h = self.conv1(x) h = self.bn1(h) h = self.relu(h) h = self.maxpool(h) h = self.layer1(h) h = self.layer2(h) h = self.layer3(h) h = self.layer4[0](h) # h = self.layer4(h) h = self.avgpool(h) h = h.view(h.size(0), -1) h = self.fc(h) return h #export class ResNet50_3D(nn.Module): def __init__(self, num_classes, **kwargs): super(ResNet50_3D, self).__init__() if 'model_pretrained' in kwargs.keys(): print(f"ResNet50_3D is loading pretrained ResNet50 from {kwargs['model_pretrained']}") pretrained_resnet50 = torch.load('./model-pretrained/resnet-50-kinetics.pth', map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu")) kwargs.pop('model_pretrained', None) resnet = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) keys = [k for k,v in pretrained_resnet50['state_dict'].items()] pretrained_state_dict = {k[7:]: v.cpu() for k, v in pretrained_resnet50['state_dict'].items()} resnet.load_state_dict(pretrained_state_dict) else: resnet = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) # chenage the last layer to match number of classes resnet.fc = nn.Linear(resnet.fc.weight.shape[1], num_classes) # self.feature_extractor = nn.Sequential(*list(resnet.children())[:-1]) self.feature_extractor = resnet # self.final = nn.Sequential( # nn.Linear(resnet.fc.in_features, num_classes), # ) def forward(self, x): # The input x will now be size [batch_size, c, seq_len, h, w]. # This is what I might get..Sequence (bs, 4, 3, 224, 224) #batch_size, c, h, w = x.shape #x = x.view(batch_size, c, h, w) x = self.feature_extractor(x) #x = x.view(batch_size, -1) # x = self.final(x) #x = x.view(batch_size, -1) return x #export def resnet10(**kwargs): """Constructs a ResNet-18 model. """ model = ResNet(BasicBlock, [1, 1, 1, 1], **kwargs) return model def resnet18(**kwargs): """Constructs a ResNet-18 model. """ model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs) return model def resnet34(**kwargs): """Constructs a ResNet-34 model. """ model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs) return model def resnet50(**kwargs): """Constructs a ResNet-50 model. """ print('function resnet50') model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) return model def resnet101(**kwargs): """Constructs a ResNet-101 model. """ model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs) return model def resnet152(**kwargs): """Constructs a ResNet-101 model. """ model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs) return model def resnet200(**kwargs): """Constructs a ResNet-101 model. """ model = ResNet(Bottleneck, [3, 24, 36, 3], **kwargs) return model model = ResNet50_3D(num_classes=101, sample_size=224, sample_duration=16, model_pretrained='./model-pretrained/resnet-50-kinetics.pth') model model = resnet10(sample_size=224, sample_duration=16) model #hide from nbdev.export import * notebook2script()
Converted 01_dataset_ucf101.ipynb. Converted 02_avi.ipynb. Converted 04_data_augmentation.ipynb. Converted 05_models.ipynb. Converted 06_models-resnet_3d.ipynb. Converted 07_utils.ipynb. Converted 10_run-baseline.ipynb. Converted 11_run-sequence-convlstm.ipynb. Converted 12_run-sequence-3d.ipynb. Converted 14_fastai_sequence.ipynb. Converted index.ipynb.
Apache-2.0
06_models-resnet_3d.ipynb
andreamunafo/actions-in-videos
OUTDATED, the examples moved to the gallery. See https://empymod.github.io/emg3d-gallery ---- 3D tri-axial anisotropy comparison between `emg3d` and `SimPEG` `SimPEG` is an open-source Python package for simulation and gradient-based parameter estimation in geophysical applications, see https://simpeg.xyz. We can use `emg3d` as a solver for `SimPEG` and compare it with the forward solver `Pardiso`. Requires - **emg3d >= 0.9.0** - ``discretize``, ``SimPEG``, ``pymatsolver`` - ``numpy``, ``scipy``, ``numba``, ``matplotlib``. Note: in order to use the `Pardiso` solver, `pymatsolver` has to be installed via `conda`, not via `pip`!
import time
import emg3d
import discretize
import numpy as np
import SimPEG, pymatsolver
from SimPEG.EM import FDEM
from SimPEG import Mesh, Maps
from SimPEG.Survey import Data
import matplotlib.pyplot as plt
from timeit import default_timer
from contextlib import contextmanager
from datetime import datetime, timedelta
from pymatsolver import Pardiso as Solver
from matplotlib.colors import LogNorm, SymLogNorm

%load_ext memory_profiler

# Style adjustments
%matplotlib notebook
plt.style.use('ggplot')
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Model and survey parameters
# Depths (0 is sea-surface)
water_depth = 1000
target_x = np.r_[-500, 500]
target_y = target_x
target_z = -water_depth + np.r_[-400, -100]

# Resistivities
res_air = 2e8
res_sea = 0.33
res_back = [1., 2., 3.]  # Background in x-, y-, and z-directions
res_target = 100.

freq = 1.0

src = [-100, 100, 0, 0, -900, -900]
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Mesh and source-field
# skin depth skin_depth = 503/np.sqrt(res_back[0]/freq) print(f"\nThe skin_depth is {skin_depth} m.\n") cs = 100 # 100 m min_width of cells pf = 1.15 # Padding factor x- and y-directions pfz = 1.35 # z-direction npadx = 12 # Nr of padding in x- and y-directions npadz = 9 # z-direction domain_x = 4000 # x- and y-domain domain_z = - target_z[0] # z-domain # Create mesh mesh = Mesh.TensorMesh( [[(cs, npadx, -pf), (cs, int(domain_x/cs)), (cs, npadx, pf)], [(cs, npadx, -pf), (cs, int(domain_x/cs)), (cs, npadx, pf)], [(cs, npadz, -pfz), (cs, int(domain_z/cs)), (cs, npadz, pfz)]] ) # Center mesh mesh.x0 = np.r_[-mesh.hx.sum()/2, -mesh.hy.sum()/2, -mesh.hz[:-npadz].sum()] # Create the source field for this mesh and given frequency sfield = emg3d.utils.get_source_field(mesh, src, freq, strength=0) # We take the receiver locations at the actual CCx-locations rec_x = mesh.vectorCCx[12:-12] print(f"Receiver locations:\n{rec_x}\n") mesh
The skin_depth is 503.0 m. Receiver locations: [-1950. -1850. -1750. -1650. -1550. -1450. -1350. -1250. -1150. -1050. -950. -850. -750. -650. -550. -450. -350. -250. -150. -50. 50. 150. 250. 350. 450. 550. 650. 750. 850. 950. 1050. 1150. 1250. 1350. 1450. 1550. 1650. 1750. 1850. 1950.]
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Create model
# Layered_background res_x = res_air*np.ones(mesh.nC) res_x[mesh.gridCC[:, 2] <= 0] = res_sea res_y = res_x.copy() res_z = res_x.copy() res_x[mesh.gridCC[:, 2] <= -water_depth] = res_back[0] res_y[mesh.gridCC[:, 2] <= -water_depth] = res_back[1] res_z[mesh.gridCC[:, 2] <= -water_depth] = res_back[2] res_x_bg = res_x.copy() res_y_bg = res_y.copy() res_z_bg = res_z.copy() # Include the target target_inds = ( (mesh.gridCC[:, 0] >= target_x[0]) & (mesh.gridCC[:, 0] <= target_x[1]) & (mesh.gridCC[:, 1] >= target_y[0]) & (mesh.gridCC[:, 1] <= target_y[1]) & (mesh.gridCC[:, 2] >= target_z[0]) & (mesh.gridCC[:, 2] <= target_z[1]) ) res_x[target_inds] = res_target res_y[target_inds] = res_target res_z[target_inds] = res_target # Create emg3d-models for given frequency pmodel = emg3d.utils.Model(mesh, res_x, res_y, res_z) pmodel_bg = emg3d.utils.Model(mesh, res_x_bg, res_y_bg, res_z_bg) # Plot a slice mesh.plot_3d_slicer(pmodel.res_x, zslice=-1100, clim=[0, 2], xlim=(-4000, 4000), ylim=(-4000, 4000), zlim=(-2000, 500))
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Calculate `emg3d`
%memit em3_tg = emg3d.solver.solver(mesh, pmodel, sfield, verb=3, nu_pre=0, semicoarsening=True)

%memit em3_bg = emg3d.solver.solver(mesh, pmodel_bg, sfield, verb=3, nu_pre=0, semicoarsening=True)
:: emg3d START :: 21:03:43 :: MG-cycle : 'F' sslsolver : False semicoarsening : True [1 2 3] tol : 1e-06 linerelaxation : False [0] maxit : 50 nu_{i,1,c,2} : 0, 0, 1, 2 verb : 3 Original grid : 64 x 64 x 32 => 131,072 cells Coarsest grid : 2 x 2 x 2 => 8 cells Coarsest level : 5 ; 5 ; 4 [hh:mm:ss] rel. error [abs. error, last/prev] l s h_ 2h_ \ / 4h_ \ /\ / 8h_ \ /\ / \ / 16h_ \ /\ / \ / \ / 32h_ \/\/ \/ \/ \/ [21:03:44] 5.250e-02 after 1 F-cycles [2.931e-07, 0.052] 0 1 [21:03:45] 6.468e-03 after 2 F-cycles [3.611e-08, 0.123] 0 2 [21:03:45] 8.049e-04 after 3 F-cycles [4.494e-09, 0.124] 0 3 [21:03:45] 1.435e-04 after 4 F-cycles [8.012e-10, 0.178] 0 1 [21:03:46] 4.756e-05 after 5 F-cycles [2.655e-10, 0.331] 0 2 [21:03:46] 5.863e-06 after 6 F-cycles [3.274e-11, 0.123] 0 3 [21:03:47] 1.947e-06 after 7 F-cycles [1.087e-11, 0.332] 0 1 [21:03:47] 1.068e-06 after 8 F-cycles [5.965e-12, 0.549] 0 2 [21:03:48] 3.441e-07 after 9 F-cycles [1.921e-12, 0.322] 0 3 > CONVERGED > MG cycles : 9 > Final rel. error : 3.441e-07 :: emg3d END :: 21:03:48 :: runtime = 0:00:04 peak memory: 337.88 MiB, increment: 26.70 MiB
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Calculate `SimPEG`
# Set up the PDE prob = FDEM.Problem3D_e(mesh, sigmaMap=Maps.IdentityMap(mesh), Solver=Solver) # Set up the receivers rx_locs = Mesh.utils.ndgrid([rec_x, np.r_[0], np.r_[-water_depth]]) rx_list = [ FDEM.Rx.Point_e(orientation='x', component="real", locs=rx_locs), FDEM.Rx.Point_e(orientation='x', component="imag", locs=rx_locs) ] # We use the emg3d-source-vector, to ensure we use the same in both cases src_sp = FDEM.Src.RawVec_e(rx_list, s_e=sfield.vector, freq=freq) src_list = [src_sp] survey = FDEM.Survey(src_list) # Create the simulation prob.pair(survey) @contextmanager def ctimeit(before=''): """Print time used by commands run within the context manager.""" t0 = default_timer() yield t1 = default_timer() - t0 print(f"{before}{timedelta(seconds=np.round(t1))}") with ctimeit("SimPEG runtime: "): %memit spg_tg_dobs = survey.dpred(np.vstack([1./res_x, 1./res_y, 1./res_z]).T) spg_tg = Data(survey, dobs=spg_tg_dobs) with ctimeit("SimPEG runtime: "): %memit spg_bg_dobs = survey.dpred(np.vstack([1./res_x_bg, 1./res_y_bg, 1./res_z_bg]).T) spg_bg = Data(survey, dobs=spg_bg_dobs)
peak memory: 10460.16 MiB, increment: 9731.63 MiB SimPEG runtime: 0:03:53
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Plot result
ix1, ix2 = 12, 12 iy = 32 iz = 13 mesh.vectorCCx[ix1], mesh.vectorCCx[-ix2-1], mesh.vectorNy[iy], mesh.vectorNz[iz] plt.figure(figsize=(9, 6)) plt.subplot(221) plt.title('|Real(response)|') plt.semilogy(rec_x/1e3, np.abs(em3_bg.fx[ix1:-ix2, iy, iz].real)) plt.semilogy(rec_x/1e3, np.abs(em3_tg.fx[ix1:-ix2, iy, iz].real)) plt.semilogy(rec_x/1e3, np.abs(spg_bg[src_sp, rx_list[0]]), 'C4--') plt.semilogy(rec_x/1e3, np.abs(spg_tg[src_sp, rx_list[0]]), 'C5--') plt.xlabel('Offset (km)') plt.ylabel('$E_x$ (V/m)') plt.subplot(223) plt.title('|Imag(response)|') plt.semilogy(rec_x/1e3, np.abs(em3_bg.fx[ix1:-ix2, iy, iz].imag), label='emg3d BG') plt.semilogy(rec_x/1e3, np.abs(em3_tg.fx[ix1:-ix2, iy, iz].imag), label='emg3d target') plt.semilogy(rec_x/1e3, np.abs(spg_bg[src_sp, rx_list[1]]), 'C4--', label='SimPEG BG') plt.semilogy(rec_x/1e3, np.abs(spg_tg[src_sp, rx_list[1]]), 'C5--', label='SimPEG target') plt.xlabel('Offset (km)') plt.ylabel('$E_x$ (V/m)') plt.legend() plt.subplot(222) plt.title('Relative error Real') plt.semilogy(rec_x/1e3, 100*np.abs((spg_bg[src_sp, rx_list[0]]-em3_bg.fx[ix1:-ix2, iy, iz].real)/ em3_bg.fx[ix1:-ix2, iy, iz].real), label='BG') plt.semilogy(rec_x/1e3, 100*np.abs((spg_tg[src_sp, rx_list[0]]-em3_tg.fx[ix1:-ix2, iy, iz].real)/ em3_tg.fx[ix1:-ix2, iy, iz].real), label='target') plt.xlabel('Offset (km)') plt.ylabel('Rel. Error (%)') plt.legend() plt.subplot(224) plt.title('Relative error (%) Imag') plt.semilogy(rec_x/1e3, 100*np.abs((spg_bg[src_sp, rx_list[1]]-em3_bg.fx[ix1:-ix2, iy, iz].imag)/ em3_bg.fx[ix1:-ix2, iy, iz].imag), label='BG') plt.semilogy(rec_x/1e3, 100*np.abs((spg_tg[src_sp, rx_list[1]]-em3_tg.fx[ix1:-ix2, iy, iz].imag)/ em3_tg.fx[ix1:-ix2, iy, iz].imag), label='target') plt.xlabel('Offset (km)') plt.ylabel('Rel. Error (%)') plt.legend() plt.tight_layout() plt.show() emg3d.Report([discretize, SimPEG, pymatsolver])
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Portfolio Optimization The portfolio optimization problem is a combinatorial optimization problem that seeks the optimal combination of assets based on the balance between risk and return. Cost Function The cost function for the portfolio optimization problem is $$E = -\sum \mu_i q_i + \gamma \sum \delta_{i,j}q_i q_j$$ The 1st term represents the return of the assets and the 2nd term the risk we estimate. Example Now, let's choose two of the six assets and find the optimal combination.
import numpy as np
from blueqat import vqe
from blueqat.pauli import I, X, Y, Z
from blueqat.pauli import from_qubo
from blueqat.pauli import qubo_bit as q
from blueqat import Circuit
_____no_output_____
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
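As an illustration of the cost function above, the short sketch below evaluates E = -Σ μ_i q_i + γ Σ δ_ij q_i q_j for two candidate selections. The values of mu, delta and gamma here are hypothetical toy numbers chosen only to show how the return and risk terms trade off; the actual asset data used by the notebook is loaded in the cells that follow.

import numpy as np

mu = np.array([0.03, 0.01, 0.02])          # hypothetical expected returns
delta = np.array([[0.0, 0.002, 0.001],     # hypothetical pairwise risk (upper triangle)
                  [0.0, 0.0,   0.003],
                  [0.0, 0.0,   0.0]])
gamma = 0.5                                 # hypothetical risk weight

def cost(q_sel):
    # E = -sum_i mu_i q_i + gamma * sum_{i<j} delta_ij q_i q_j
    q_sel = np.asarray(q_sel)
    return -mu @ q_sel + gamma * q_sel @ delta @ q_sel

print(cost([1, 1, 0]))   # pick assets 0 and 1
print(cost([1, 0, 1]))   # pick assets 0 and 2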
Use the following as return data
asset_return = np.diag([-0.026,-0.031,-0.007,-0.022,-0.010,-0.055])
print(asset_return)
[[-0.026 0. 0. 0. 0. 0. ] [ 0. -0.031 0. 0. 0. 0. ] [ 0. 0. -0.007 0. 0. 0. ] [ 0. 0. 0. -0.022 0. 0. ] [ 0. 0. 0. 0. -0.01 0. ] [ 0. 0. 0. 0. 0. -0.055]]
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
Use the following as risk data
asset_risk = [[0,0.0015,0.0012,0.0018,0.0022,0.0012],
              [0,0,0.0017,0.0022,0.0005,0.0019],
              [0,0,0,0.0040,0.0032,0.0024],
              [0,0,0,0,0.0012,0.0076],
              [0,0,0,0,0,0.0021],
              [0,0,0,0,0,0]]
np.asarray(asset_risk)
_____no_output_____
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
The QUBO is then converted to a Hamiltonian and solved. In addition, this time there is the constraint of selecting exactly two of the six assets, which is implemented using an XY mixer.
#convert qubo to pauli
qubo = asset_return + np.asarray(asset_risk)*0.5
hamiltonian = from_qubo(qubo)

init = Circuit(6).x[0,1]

mixer = I()*0
for i in range(5):
    for j in range(i+1, 6):
        mixer += (X[i]*X[j] + Y[i]*Y[j])*0.5

step = 1

result = vqe.Vqe(vqe.QaoaAnsatz(hamiltonian, step, init, mixer)).run()
print(result.most_common(12))

result.circuit.run(backend="draw")
_____no_output_____
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
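A brief aside on why the XY mixer enforces the two-asset constraint: the term (X_i X_j + Y_i Y_j)/2 only moves amplitude between bit strings that swap a 0 and a 1, so the number of selected assets fixed by the initial state Circuit(6).x[0,1] is conserved. The minimal numpy check below (independent of blueqat, using X_m/Y_m to avoid shadowing the imported Pauli symbols) shows the two-qubit matrix explicitly.

import numpy as np

X_m = np.array([[0, 1], [1, 0]])
Y_m = np.array([[0, -1j], [1j, 0]])

# Two-qubit mixer term (X⊗X + Y⊗Y)/2 in the basis |00>, |01>, |10>, |11>
xy = (np.kron(X_m, X_m) + np.kron(Y_m, Y_m)) / 2
print(np.real(xy))
# Only the |01> <-> |10> entries are non-zero, so the Hamming weight
# (the number of selected assets) is preserved by the mixer evolution.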
Bootstrap-based confidence intervals
import numpy as np
import pandas as pd

%pylab inline
Populating the interactive namespace from numpy and matplotlib
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Loading the data Telecommunications repair times Verizon is the primary regional telecommunications company (Incumbent Local Exchange Carrier, ILEC) in the western part of the US. Because of this, the company is obliged to provide a telecommunications-equipment repair service not only for its own customers, but also for the customers of other local telecommunications companies (Competing Local Exchange Carriers, CLEC). In cases where the equipment repair time for other companies' customers is substantially longer than for its own, Verizon can be fined.
data = pd.read_csv('verizon.txt', sep='\t')
data.shape

data.head()

data.Group.value_counts()

pylab.figure(figsize(12, 5))

pylab.subplot(1,2,1)
pylab.hist(data[data.Group == 'ILEC'].Time, bins = 20, color = 'b', range = (0, 100), label = 'ILEC')
pylab.legend()

pylab.subplot(1,2,2)
pylab.hist(data[data.Group == 'CLEC'].Time, bins = 20, color = 'r', range = (0, 100), label = 'CLEC')
pylab.legend()

pylab.show()
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Bootstrap
def get_bootstrap_samples(data, n_samples):
    # Draw n_samples pseudo-samples of the same length as data, with replacement
    indices = np.random.randint(0, len(data), (n_samples, len(data)))
    samples = data[indices]
    return samples

def stat_intervals(stat, alpha):
    # Percentile-based (1 - alpha) confidence interval for the bootstrapped statistic
    boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
    return boundaries
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
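As a quick sanity check of the two helpers above (not part of the Verizon analysis), the sketch below bootstraps the mean of a synthetic normal sample and compares the percentile interval with the usual normal-theory interval; the two should be close.

np.random.seed(1)
sample = np.random.normal(loc=5.0, scale=2.0, size=500)

# Percentile bootstrap interval for the mean
mean_scores = list(map(np.mean, get_bootstrap_samples(sample, 1000)))
print("bootstrap 95% CI:", stat_intervals(mean_scores, 0.05))

# Normal-theory interval, mean +/- 1.96 * standard error, for comparison
se = sample.std(ddof=1) / np.sqrt(len(sample))
print("normal-theory 95% CI:", (sample.mean() - 1.96 * se, sample.mean() + 1.96 * se))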
Interval estimate of the median
ilec_time = data[data.Group == 'ILEC'].Time.values
clec_time = data[data.Group == 'CLEC'].Time.values

np.random.seed(0)

# In Python 3, map() returns an iterator, so wrap it in list() before reuse
ilec_median_scores = list(map(np.median, get_bootstrap_samples(ilec_time, 1000)))
clec_median_scores = list(map(np.median, get_bootstrap_samples(clec_time, 1000)))

print("95% confidence interval for the ILEC median repair time:", stat_intervals(ilec_median_scores, 0.05))
print("95% confidence interval for the CLEC median repair time:", stat_intervals(clec_median_scores, 0.05))
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Point estimate of the difference between medians
print "difference between medians:", np.median(clec_time) - np.median(ilec_time)
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Interval estimate of the difference between medians
delta_median_scores = list(map(lambda x: x[1] - x[0], zip(ilec_median_scores, clec_median_scores)))
print("95% confidence interval for the difference between medians:", stat_intervals(delta_median_scores, 0.05))
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Imports
from geopy.geocoders import Nominatim
from geopy.distance import distance
from pprint import pprint
import pandas as pd
import random
from typing import List, Tuple
from dotenv import dotenv_values

random.seed(123)
config = dotenv_values(".env")
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Cities
country = "Ukraine" cities = ["Lviv", "Chernihiv", "Dnipropetrovs'k", "Uzhgorod", "Kharkiv", "Odesa", "Poltava", "Kiev", "Zhytomyr", "Khmelnytskyi", "Vinnytsia","Cherkasy", "Zaporizhia", "Ternopil", "Sumy"]
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
1) Get direct (straight-line) distances using the geopy API
def get_distinct_distances(list:cities, str: country) -> pd.DataFrame: df = pd.DataFrame(index = cities, columns= cities) geolocator = Nominatim(user_agent=config["USER_AGENT"], timeout = 10000) coordinates = dict() for city in cities: location = geolocator.geocode(city + " " + country) coordinates[city] = (location.latitude, location.longitude) for origin in range(len(cities)): for destination in range(origin, len(cities)): dist = distance(coordinates[cities[origin]], coordinates[cities[destination]]).km df[cities[origin]][cities[destination]] = dist df[cities[destination]][cities[origin]] = dist return df, coordinates df_distinct, coordinates = get_distinct_distances(cities, country) df_distinct.head(15)
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Save the file locally
df_distinct.to_csv("data/direct_distances.csv")
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
2) Get route distances using the Openrouteservice API
import openrouteservice from pprint import pprint def get_route_dataframe(coordinates: dict)->pd.DataFrame: client = openrouteservice.Client(key=config['API_KEY']) cities = list(coordinates.keys()) df = pd.DataFrame(index = cities, columns= cities) for origin in range(len(coordinates.keys())): for destination in range(origin, len(coordinates.keys())): if origin != destination: l2 = ((coordinates[cities[origin]][1], coordinates[cities[origin]][0]), (coordinates[cities[destination]][1], coordinates[cities[destination]][0])) distance = client.directions(l2, units="km", radiuses=-1)['routes'][0]['segments'][0]['distance'] df[cities[origin]][cities[destination]] = df[cities[destination]][cities[origin]] = distance else: df[cities[origin]][cities[destination]] = df[cities[destination]][cities[origin]] = 0 return df import warnings warnings.filterwarnings('ignore') df_routes = get_route_dataframe(coordinates) df_routes.head(15)
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Save the file locally
df_routes.to_csv("data/route_distances.csv")
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
A* Algorithm
class AStar(): def __init__(self, cities: list[str], country: str, distances: pd.DataFrame, heuristics: pd.DataFrame): self.cities = cities self.country = country self.distances = distances self.heuristics = heuristics def generate_map(self, low, high) -> dict[str, list[str]]: from networkx.generators.degree_seq import random_degree_sequence_graph import numpy as np import networkx as nx degrees = np.random.randint(low, high, len(self.cities)) while not nx.is_graphical(degrees): degrees = np.random.randint(low, high, len(self.cities)) graph = random_degree_sequence_graph(degrees) graph = nx.relabel.relabel_nodes(graph, mapping=dict(zip(range(15), self.cities))) graph = nx.to_dict_of_lists(graph) return graph def restore_path(self, current, camefrom: dict) -> list[str]: path = [current] while current in camefrom.keys(): current = camefrom[current] path.insert(0, current) return path def run(self, origin:str, destination:str, country:str) -> Tuple[List[str], float]: gscore = dict().fromkeys(cities, float("inf")) gscore[origin] = 0 fscore = dict().fromkeys(cities, float("inf")) fscore[origin] = self.heuristics[origin][destination] camefrom = dict() openset = [] openset.append(origin) openset = list(sorted(openset, key = lambda x: fscore[x])) closed = [] while openset: current_city = openset.pop(0) closed.append(current_city) if current_city == destination: return self.restore_path(current_city, camefrom), gscore[current_city] for neighbour in country[current_city]: if neighbour not in closed: tentative_gScore = gscore[current_city] + self.distances[current_city][neighbour] if tentative_gScore < gscore[neighbour]: camefrom[neighbour] = current_city gscore[neighbour] = tentative_gScore fscore[neighbour] = gscore[neighbour] + self.heuristics[neighbour][destination] if neighbour not in openset: openset.append(neighbour) openset = list(sorted(openset, key = lambda x: fscore[x])) return (None, 0) low = 3 high = 5 distances = pd.read_csv("data/route_distances.csv", index_col = 0) heuristic = pd.read_csv("data/direct_distances.csv",index_col = 0) a_star = AStar(cities, country, distances, heuristic) pprint(a_star.generate_map(low,high))
{'Cherkasy': ['Chernihiv', 'Odesa', 'Zhytomyr', 'Ternopil'], 'Chernihiv': ["Dnipropetrovs'k", 'Poltava', 'Cherkasy', 'Kharkiv'], "Dnipropetrovs'k": ['Lviv', 'Chernihiv', 'Poltava'], 'Kharkiv': ['Chernihiv', 'Sumy', 'Vinnytsia'], 'Khmelnytskyi': ['Uzhgorod', 'Odesa', 'Ternopil'], 'Kiev': ['Lviv', 'Poltava', 'Sumy', 'Zaporizhia'], 'Lviv': ['Odesa', 'Zhytomyr', 'Kiev', "Dnipropetrovs'k"], 'Odesa': ['Lviv', 'Uzhgorod', 'Cherkasy', 'Khmelnytskyi'], 'Poltava': ['Chernihiv', "Dnipropetrovs'k", 'Vinnytsia', 'Kiev'], 'Sumy': ['Kharkiv', 'Kiev', 'Zhytomyr'], 'Ternopil': ['Khmelnytskyi', 'Vinnytsia', 'Cherkasy'], 'Uzhgorod': ['Odesa', 'Vinnytsia', 'Khmelnytskyi', 'Zaporizhia'], 'Vinnytsia': ['Uzhgorod', 'Kharkiv', 'Poltava', 'Ternopil'], 'Zaporizhia': ['Uzhgorod', 'Kiev', 'Zhytomyr'], 'Zhytomyr': ['Lviv', 'Sumy', 'Cherkasy', 'Zaporizhia']}
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
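The AStar class above follows the textbook bookkeeping of g (distance travelled so far), h (straight-line heuristic) and f = g + h. To make that logic easier to follow outside the city data, here is a self-contained sketch of the same idea on a tiny hand-made graph; the graph, edge costs and heuristic values are invented for illustration only and are not related to the Ukrainian road data.

import heapq

def a_star_toy(graph, h, start, goal):
    # graph: {node: [(neighbour, edge_cost), ...]}, h: admissible heuristic per node
    open_heap = [(h[start], 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)   # pop the node with smallest f = g + h
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_heap, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

toy_graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
             "C": [("D", 1)], "D": []}
toy_h = {"A": 3, "B": 2, "C": 1, "D": 0}   # made-up straight-line estimates
print(a_star_toy(toy_graph, toy_h, "A", "D"))   # -> (['A', 'B', 'C', 'D'], 3)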
Display the Graph
import networkx as nx import matplotlib.pyplot as plt Map = a_star.generate_map(low, high) graph = nx.Graph() graph.add_nodes_from(Map.keys()) for origin, destinations in Map.items(): graph.add_weighted_edges_from(([(origin, destination, weight) for destination, weight in zip(destinations, [distances[origin][dest] for dest in destinations])])) pos = nx.fruchterman_reingold_layout(graph, seed = 321) plt.figure(figsize = (30, 18)) nx.draw_networkx_nodes(graph, pos, node_color="yellow",label="blue", node_size = 1500) nx.draw_networkx_labels(graph, pos, font_color="blue") nx.draw_networkx_edges(graph, pos, edge_color='blue') nx.draw_networkx_edge_labels(graph,pos, edge_labels=nx.get_edge_attributes(graph,'weight'), font_color = "brown") plt.show() path, distance = a_star.run("Vinnytsia",'Poltava', Map) print("Solution:{}\nDistance:{}".format(",".join(path), distance))
Solution:Vinnytsia,Dnipropetrovs'k,Poltava Distance:705.274
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Draw the solution
edges = [(path[i-1], path[i]) for i in range(1, len(path))] plt.figure(figsize = (30, 18)) nx.draw_networkx_nodes(graph, pos, node_color="yellow",label="blue", node_size = 1500) nx.draw_networkx_labels(graph, pos, font_color="blue") nx.draw_networkx_edges(graph, pos, edge_color='blue', arrows=False) nx.draw_networkx_edges(graph, pos, edgelist=edges ,edge_color='red', width = 5, alpha = 0.7, arrows=True) nx.draw_networkx_edge_labels(graph,pos, edge_labels=nx.get_edge_attributes(graph,'weight'), font_color = "brown") plt.show()
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Check that we get the right proportion of NMACs
num_enc = 10000
num_nmac = 0
NMAC_R = 500

for _ in range(num_enc):
    tca = 50
    st, int_act_gen = mc_encounter(tca)
    ac0, ac1, prev_a = st

    for _ in range(tca):
        ac0 = advance_ac(ac0, a_int('NOOP'))
        ac1 = advance_ac(ac1, next(int_act_gen))

    obs = state_to_obs(State(ac0, ac1, a_int('NOOP')))
    if obs.r <= NMAC_R:
        num_nmac += 1

num_nmac / num_enc

avg_maneuver_len = 15
NUM_A = 3

p_self = (avg_maneuver_len - 1) / avg_maneuver_len
p_trans = (1 - p_self) / (NUM_A - 1)
p_t = ((p_self - p_trans) * np.identity(NUM_A)
       + p_trans * np.ones((NUM_A, NUM_A)))

acts = [10, 10, 12, 123]
g = action_generator(p_t, acts)

1 / NUM_A * np.ones((NUM_A, NUM_A))
_____no_output_____
MIT
notebooks/encounter_test.ipynb
osmanylc/deep-rl-collision-avoidance
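The intruder's maneuvers above are driven by a Markov transition matrix p_t built so that an action is kept for avg_maneuver_len steps on average (a geometric dwell time with escape probability 1 - p_self). The helpers mc_encounter, advance_ac and action_generator come from the repository and are not reproduced here; the small simulation below only double-checks, under that reading, that p_t is a valid transition matrix and that the mean dwell time comes out near 15.

import numpy as np

np.random.seed(0)
avg_maneuver_len = 15
NUM_A = 3
p_self = (avg_maneuver_len - 1) / avg_maneuver_len
p_trans = (1 - p_self) / (NUM_A - 1)
p_t = (p_self - p_trans) * np.identity(NUM_A) + p_trans * np.ones((NUM_A, NUM_A))

print(p_t.sum(axis=1))   # each row sums to 1, so p_t is a valid transition matrix

# Dwell time in a state is geometric with success probability (1 - p_self),
# so the empirical mean maneuver length should be close to avg_maneuver_len.
dwells, state, run = [], 0, 1
for _ in range(50000):
    nxt = np.random.choice(NUM_A, p=p_t[state])
    if nxt == state:
        run += 1
    else:
        dwells.append(run)
        state, run = nxt, 1
print(np.mean(dwells))   # ~15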
MNIST handwritten digits classification with nearest neighbors In this notebook, we'll use [nearest-neighbor classifiers](http://scikit-learn.org/stable/modules/neighbors.htmlnearest-neighbors-classification) to classify MNIST digits using scikit-learn (version 0.20 or later required). First, the needed imports.
%matplotlib inline

from pml_utils import get_mnist, show_failures

import numpy as np
from sklearn import neighbors, __version__
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

from distutils.version import LooseVersion as LV
assert(LV(__version__) >= LV("0.20")), "Version >= 0.20 of sklearn is required."
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Then we load the MNIST data. The first time, the data needs to be downloaded, which can take a while.
X_train, y_train, X_test, y_test = get_mnist('MNIST')

print('MNIST data loaded: train:',len(X_train),'test:',len(X_test))
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_test', X_test.shape)
print('y_test', y_test.shape)
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
The training data (`X_train`) is a matrix of size (60000, 784), i.e. it consists of 60000 digits expressed as 784-dimensional vectors (28x28 images flattened to 1D). `y_train` is a 60000-dimensional vector containing the correct classes ("0", "1", ..., "9") for each training digit. Let's take a closer look. Here are the first 10 training digits:
pltsize=1
plt.figure(figsize=(10*pltsize, pltsize))

for i in range(10):
    plt.subplot(1,10,i+1)
    plt.axis('off')
    plt.imshow(X_train[i,:].reshape(28, 28), cmap="gray")
    plt.title('Class: '+y_train[i])
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
1-NN classifier Initialization Let's first create a 1-NN classifier. Note that with nearest-neighbor classifiers there is no internal (parameterized) model and therefore no learning is required. Instead, calling the `fit()` function simply stores the samples of the training data in a suitable data structure.
%%time

n_neighbors = 1
clf_nn = neighbors.KNeighborsClassifier(n_neighbors)
clf_nn.fit(X_train, y_train)
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
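To make the "fit() just stores the training set" point concrete, here is a small hand-rolled check (an addition, not part of the original notebook): a brute-force nearest-neighbour lookup with numpy, which should give the same labels as clf_nn.predict for the first few test digits.

def predict_1nn_manually(x, X_ref, y_ref):
    # squared Euclidean distance from x to every stored training sample
    d2 = ((X_ref - x) ** 2).sum(axis=1)
    return y_ref[d2.argmin()]

for i in range(5):
    print('manual:', predict_1nn_manually(X_test[i], X_train, y_train), ' true:', y_test[i])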
Inference Now we try to classify some test samples with it.
%%time

pred_nn = clf_nn.predict(X_test[:200,:])
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
We observe that the classifier is rather slow, and classifying the whole test set would take quite some time. What is the reason for this? The accuracy of the classifier:
print('Predicted', len(pred_nn), 'digits with accuracy:', accuracy_score(y_test[:len(pred_nn)], pred_nn))
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
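The slowness comes from brute-force search: every test digit must be compared against all 60000 stored training digits in 784 dimensions, i.e. on the order of 10^11 arithmetic operations for the full test set. One mitigation that keeps all the training data is to parallelise the distance computations over CPU cores via the n_jobs parameter (a standard KNeighborsClassifier option); tree-based indexes such as kd_tree help less here because the data is so high-dimensional. A minimal sketch, not part of the original notebook:

# Rough operation count for classifying the whole test set by brute force
n_test, n_train, n_dim = len(X_test), len(X_train), X_train.shape[1]
print(f"~{n_test * n_train * n_dim:.1e} distance terms to evaluate")

# Same 1-NN classifier, but distance computations spread over all CPU cores
clf_nn_parallel = neighbors.KNeighborsClassifier(n_neighbors=1, n_jobs=-1)
clf_nn_parallel.fit(X_train, y_train)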
Faster 1-NN classifier Initialization One way to make our 1-NN classifier faster is to use less training data:
%%time

n_neighbors = 1
n_data = 1024
clf_nn_fast = neighbors.KNeighborsClassifier(n_neighbors)
clf_nn_fast.fit(X_train[:n_data,:], y_train[:n_data])
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Inference Now we can use the classifier created with reduced data to classify our whole test set in a reasonable amount of time.
%%time

pred_nn_fast = clf_nn_fast.predict(X_test)
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
However, the classification accuracy is now not as good:
print('Predicted', len(pred_nn_fast), 'digits with accuracy:', accuracy_score(y_test, pred_nn_fast))
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Confusion matrix We can compute the confusion matrix to see which digits get mixed the most:
labels=[str(i) for i in range(10)]

print('Confusion matrix (rows: true classes; columns: predicted classes):'); print()
cm=confusion_matrix(y_test, pred_nn_fast, labels=labels)
print(cm); print()
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
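A small addition to the original notebook: the per-class accuracy (recall) can be read off the confusion matrix diagonal, and classification_report (already imported above) gives the same per-class breakdown together with precision and F1.

# Per-class accuracy (recall) from the confusion matrix diagonal
per_class_acc = cm.diagonal() / cm.sum(axis=1)
for label, acc in zip(labels, per_class_acc):
    print('class', label, 'accuracy: %.3f' % acc)

print(classification_report(y_test, pred_nn_fast, labels=labels))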
Plotted as an image:
plt.matshow(cm, cmap=plt.cm.gray)
plt.xticks(range(10))
plt.yticks(range(10))
plt.grid(None)
plt.show()
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts