Log Conversion

> Converts the event logs into CSV format to make them easier to load
%load_ext autoreload
%autoreload 2
%matplotlib inline
The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload
Apache-2.0
01_log_conversion.ipynb
L0D3/P191919
Pyedra's Tutorial

This tutorial is intended to serve as a guide for using Pyedra to analyze asteroid phase curve data.

Imports

The first thing we will do is import the necessary libraries. In general you will need the following:

- `pyedra` (*pyedra*) is the library that we present in this tutorial.
- `pandas` (*pandas*) will allow you to import your data as a dataframe.

_Note: In this tutorial we assume that you already have experience using these libraries._
import pyedra
import pandas as pd
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Load the data

The next thing we have to do is load our data. Pyedra expects a dataframe with three columns: id (MPC number of the asteroid), alpha ($\alpha$, phase angle) and v (reduced magnitude in Johnson's V filter). You must respect the order and the names of the columns as given above. For this step we recommend using pandas:

`df = pd.read_csv('somefile.csv')`

For this tutorial we will use a preloaded data set offered by Pyedra.
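If you are loading your own observations instead of the bundled data set, a minimal sketch could look like the following (the file name below is hypothetical; only the three-column layout matters):

```python
obs = pd.read_csv('my_observations.csv')   # hypothetical file name
obs = obs[['id', 'alpha', 'v']]            # keep the expected columns, in this order
```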
df = pyedra.datasets.load_carbognani2019()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Here we show you the structure that your data file should have. Note that the file can contain information about many asteroids, which allows you to obtain catalogs of phase-function parameters for large databases.
df
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Fit your data

Pyedra's main objective is to fit a phase function model to our data. Currently the API offers three different models:

- `HG_fit` (H, G model): $V(\alpha)=H-2.5\log_{10}[(1-G)\Phi_{1}(\alpha)+G\Phi_{2}(\alpha)]$
- `Shev_fit` (Shevchenko model): $V(1,\alpha)=V(1,0)-\frac{a}{1+\alpha}+b\cdot\alpha$
- `HG1G2_fit` (H, G$_1$, G$_2$ model): $V(\alpha) = H-2.5\log_{10}[G_{1}\Phi_{1}(\alpha)+G_{2}\Phi_{2}(\alpha)+(1-G_{1}-G_{2})\Phi_{3}(\alpha)]$

We will now explain how to apply each of them. By the end of this tutorial you will notice that they all work analogously and that their use is very simple.

HG_fit

Let's assume that we want to fit the biparametric H, G model to our data set. To do this we invoke Pyedra's `HG_fit` function:
HG = pyedra.HG_fit(df)
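For intuition about what `HG_fit` estimates, here is a small sketch (not Pyedra's internal code) that evaluates the H, G model above at a given phase angle, assuming the standard Bowell et al. (1989) approximations for $\Phi_1$ and $\Phi_2$:

```python
import numpy as np

def hg_model(alpha_deg, H, G):
    """Reduced magnitude V(alpha) for the H, G model (illustrative sketch only)."""
    alpha = np.radians(alpha_deg)
    phi1 = np.exp(-3.33 * np.tan(alpha / 2) ** 0.63)
    phi2 = np.exp(-1.87 * np.tan(alpha / 2) ** 1.22)
    return H - 2.5 * np.log10((1 - G) * phi1 + G * phi2)

# e.g. hg_model(10.0, H=7.5, G=0.15) gives the reduced magnitude at a 10-degree phase angle
```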
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We have already created our catalog of H, G parameters for our data set. Let's see what it looks like.
HG
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
**R** is the coefficient of determination of the fit. All pandas dataframe operations are available. For example, you may be interested in the mean H of your sample. To obtain it:
HG.H.mean()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Remember that `HG.H` selects the H column.
HG.H
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
The `PyedraFitDataFrame` can also be filtered, like a canonical pandas dataframe. Let's assume that we want to save the created catalog, but only for those asteroids whose id is less than 300. All we have to do is:
filtered = HG.model_df[HG.model_df['id'] < 300]
filtered
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Finally, we want to see our data plotted together with their respective fits. To do this we use the `.plot` method provided by Pyedra. To obtain the plot of the phase-function fits we only have to pass the dataframe that contains our data to `.plot`, in the following way:
HG.plot(df=df)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
If your database is very large and you want a clearer plot, or if you only want to see the fit of one of the asteroids, you can filter your initial dataframe.
asteroid_85 = df[df['id'] == 85]
HG_85 = pyedra.HG_fit(asteroid_85)
HG_85.plot(df=asteroid_85)
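The same idea extends to several asteroids at once; a small sketch using pandas' `isin` (the id values below are placeholders, not taken from the tutorial data):

```python
some_ids = [85, 208, 306]                 # hypothetical ids present in your dataframe
subset = df[df['id'].isin(some_ids)]
HG_subset = pyedra.HG_fit(subset)
HG_subset.plot(df=subset)
```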
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
All pandas plots are available if you want to use any of them. For example, we may want to visualize the histogram of one of the parameters:
HG.plot(y='G', kind='hist')
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Or we may want to find out if there is a correlation between parameters:
HG.plot(x='G', y='H', kind='scatter', marker='o', color='black')
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Everything we have done in this section extends in an analogous way to the rest of the models, as we will see below.

HG1G2_fit

Now we want to fit the H, G$_1$, G$_2$ model to our data. Use the function `HG1G2_fit` in the following way.
HG1G2 = pyedra.HG1G2_fit(df)
HG1G2
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
**R** is the coefficient of determination of the fit. We can calculate, for example, the median of each of the columns:
HG1G2.median()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Again, we can filter our catalog. Here we keep the best fits, that is, those whose R is greater than 0.98.
best_fits = HG1G2.model_df[HG1G2.model_df['R'] > 0.98]
best_fits
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We will now look at the plots.
HG1G2.plot(df=df)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
If we want to visualize the plot for asteroid 522 only:
asteroid_522 = df[df['id'] == 522]
HG1G2_522 = pyedra.HG1G2_fit(asteroid_522)
HG1G2_522.plot(df=asteroid_522)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
To see the correlation between the parameters G$_1$ and G$_2$ we can use the "scatter" graph of pandas:
HG1G2.plot(x='G1', y='G2', kind='scatter')
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Shev_fit

If we want to fit the Shevchenko model to our data, we use `Shev_fit`.
Shev = pyedra.Shev_fit(df)
Shev
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
**R** is the coefficient of determination of the fit. We can select a particular column and calculate, for example, its minimum:
Shev.V_lin
Shev.V_lin.min()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
And, of course, we can plot the resulting fit:
Shev.plot(df=df)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Selecting a subsample:
subsample = df[df['id'] > 100]
Shev_subsample = pyedra.Shev_fit(subsample)
Shev_subsample.plot(df=subsample)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We can also use any of the pandas plots.
Shev_subsample.plot(y=['b', 'error_b'], kind='density', subplots=True, figsize=(5,5), xlim=(0,2))
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Gaia Data

Below we show the procedure to combine an observation dataset with Gaia DR2 observations. We import the Gaia data with `load_gaia()`:
gaia = pyedra.datasets.load_gaia()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We then join both datasets (ours and Gaia's) with `merge_obs`:
merge = pyedra.merge_obs(df, gaia)
merge = merge[['id', 'alpha', 'v']]
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We can then apply all of the previous functionality to the new dataframe:
catalog = pyedra.HG_fit(merge)
catalog.plot(df=merge)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Shannon's Entropy of ABA features
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from skimage.measure import shannon_entropy
from morphontogeny.functions.IO import reconstruct_ABA

def level_arr(array, levels=256):
    arr = array - np.nanmin(array)               # set the minimum to zero
    arr = (arr / np.nanmax(arr)) * (levels - 1)  # and the max to the number of levels - 1
    arr = np.round(arr)                          # round to ints.
    return arr

# Loading features
SFT_features = np.load('files/SFT_100features.npy')

# List of Entropies
SFT_shannon_entropies = []

# Indices file
indices_file = 'files/mask_indices.npy'

for ii in range(SFT_features.shape[1]):
    # Reconstructing features
    SFT_feature = reconstruct_ABA(SFT_features[:, ii], indices_file=indices_file,
                                  outside_value=np.nan, mirror=False)

    # Levelling the arrays
    SFT_feature = level_arr(SFT_feature)

    # Adding entropies
    SFT_shannon_entropies.append(shannon_entropy(SFT_feature))
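For reference, `shannon_entropy` from scikit-image returns (by default) the base-2 entropy of the distribution of values in the array. A rough NaN-aware equivalent, shown only as an illustrative sketch and not the library code, is:

```python
def entropy_from_levels(arr):
    """Base-2 Shannon entropy of the value distribution, ignoring NaNs."""
    values = arr[~np.isnan(arr)].ravel()
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))
```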
_____no_output_____
MIT
Notebooks/02_analyses/Fig2_Shannon_Entropy.ipynb
BioProteanLabs/SFt_pipeline
The pandas library

The [pandas library](https://pandas.pydata.org/) was created by [Wes McKinney](http://wesmckinney.com/) in 2010. pandas provides **data structures** and **functions** for manipulating, processing, cleaning and crunching data. In the Python ecosystem pandas is the state-of-the-art tool for working with tabular or spreadsheet-like data in which each column may be of a different type (`string`, `numeric`, `date`, or otherwise). pandas provides sophisticated indexing functionality to make it easy to reshape, slice and dice, perform aggregations, and select subsets of data. pandas relies on other packages, such as [NumPy](http://www.numpy.org/) and [SciPy](https://scipy.org/scipylib/index.html). Furthermore, pandas integrates [matplotlib](https://matplotlib.org/) for plotting.

If you are new to pandas we strongly recommend visiting the very well written [__pandas tutorials__](https://pandas.pydata.org/pandas-docs/stable/tutorials.html), which cover everything new users need to properly get started.

Once installed (for details refer to the [documentation](https://pandas.pydata.org/pandas-docs/stable/install.html)), pandas is imported using the canonical alias `pd`.
import pandas as pd
import numpy as np
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
The pandas library has two workhorse data structures: __*Series*__ and __*DataFrame*__.

* one-dimensional `pd.Series` object
* two-dimensional `pd.DataFrame` object

***

The `pd.Series` object

Data generation
# import the random module from numpy
from numpy import random

# set seed for reproducibility
random.seed(123)

# generate 26 random integers between -10 and 10
my_data = random.randint(low=-10, high=10, size=26)

# print the data
my_data
type(my_data)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels, called its _index_. We create a `pd.Series` object by calling the `pd.Series()` function.
# Uncomment to look up the documentation
# docstring
#?pd.Series
# source
#??pd.Series

# create a pd.Series object
s = pd.Series(data=my_data)
s
type(s)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

`pd.Series` attributes

Python objects in general, and `pd.Series` objects in particular, offer useful object-specific *attributes*.

* _attribute_ $\to$ `OBJECT.attribute` $\qquad$ _Note that the attribute is called without parentheses_
s.dtypes
s.index
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
We can use the `index` attribute to assign an index to a `pd.Series` object. Consider the letters of the alphabet...
import string

letters = string.ascii_uppercase
letters
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
By providing an array-type object we assign a new index to the `pd.Series` object.
s.index = list(letters)
s.index
s
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

`pd.Series` methods

Methods are functions that are called using the attribute notation. Hence they are called by appending a dot (`.`) to the Python object, followed by the name of the method, parentheses `()` and, if needed, one or more arguments (`arg`).

* _method_ $\to$ `OBJECT.method_name(arg1, arg2, ...)`
s.sum()
s.mean()
s.max()
s.min()
s.median()
s.quantile(q=0.5)
s.quantile(q=[0.25, 0.5, 0.75])
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

Element-wise arithmetic

A very useful feature of `pd.Series` objects is that we may apply arithmetic operations *element-wise*.
s + 10
#s*0.1
#10/s
#s**2
#(2+s)*1**3
#s+s
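Note that these operations align on the index labels rather than on position. A small illustration, with a second Series introduced here only for the example:

```python
s2 = pd.Series([1, 2, 3], index=["A", "B", "Z"])
s[:3] + s2   # "A" and "B" align; "C" and "Z" have no partner and become NaN
```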
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

Selection and Indexing

Another main data operation is indexing and selecting particular subsets of the data object. pandas comes with a very [rich set of methods](https://pandas.pydata.org/pandas-docs/stable/indexing.html) for these types of tasks. In its simplest form we index a Series NumPy-like, by using the `[]` operator to select a particular `index` of the Series.
s
s[3]
s[2:6]
s["C"]
s["C":"K"]
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

The `pd.DataFrame` object

The primary pandas data structure is the `DataFrame`. It is a two-dimensional, size-mutable, potentially heterogeneous tabular data structure with both row and column labels. Arithmetic operations align on both row and column labels. Basically, the `DataFrame` can be thought of as a `dictionary`-like container for Series objects.

**Generate a `DataFrame` object from scratch**

pandas facilitates the import of many different data types and sources; however, for the sake of this tutorial we generate a `DataFrame` object from scratch.

Source: http://duelingdata.blogspot.de/2016/01/the-beatles.html
df = pd.DataFrame({"id": range(1, 5),
                   "Name": ["John", "Paul", "George", "Ringo"],
                   "Last Name": ["Lennon", "McCartney", "Harrison", "Star"],
                   "dead": [True, False, True, False],
                   "year_born": [1940, 1942, 1943, 1940],
                   "no_of_songs": [62, 58, 24, 3]
                   })
df
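To make the "dictionary-like container for Series objects" idea concrete, the same kind of table can also be built from individual `pd.Series` objects (a small illustrative sketch):

```python
names = pd.Series(["John", "Paul", "George", "Ringo"])
born = pd.Series([1940, 1942, 1943, 1940])
pd.DataFrame({"Name": names, "year_born": born})
```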
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
*** `pd.DataFrame` attributes
df.dtypes
df.columns  # axis 1
df.index    # axis 0
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

`pd.DataFrame` methods

**Get a quick overview of the data set**
df.info()
df.describe()
df.describe(include="all")
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Change index to the variable `id`**
df
df.set_index("id")
df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
Note that nothing changed! By default, `set_index` returns a new object rather than modifying the original `DataFrame` in place. Hence, if we want to make a permanent change we have to assign/reassign the object to a variable: `df = df.set_index("id")`. Alternatively, some methods have the `inplace=True` argument: `df.set_index("id", inplace=True)`.
df = df.set_index("id")
df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Arithmetic methods**
df
df.sum(axis=0)
df.sum(axis=1)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
`groupby` method

[Hadley Wickham 2011: The Split-Apply-Combine Strategy for Data Analysis, Journal of Statistical Software, 40(1)](https://www.jstatsoft.org/article/view/v040i01)

Image source: [Jake VanderPlas 2016, Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/)
df
df.groupby("dead")
df.groupby("dead").sum()
df.groupby("dead")["no_of_songs"].sum()
df.groupby("dead")["no_of_songs"].mean()
df.groupby("dead")["no_of_songs"].agg(["mean", "max", "min", "sum"])
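In newer pandas releases the same split-apply-combine step can also be written with named aggregation, which makes the output column names explicit (an optional variant, not part of the original notebook):

```python
df.groupby("dead").agg(
    total_songs=("no_of_songs", "sum"),
    earliest_birth=("year_born", "min"),
)
```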
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
Family of `apply`/`map` methods

* `apply` works on a row (`axis=0`, default) / column (`axis=1`) basis of a `DataFrame`
* `applymap` works __element-wise__ on a `DataFrame`
* `map` works __element-wise__ on a `Series`
df

# (axis=0, default)
df[["Last Name", "Name"]].apply(lambda x: x.sum())

# (axis=1)
df[["Last Name", "Name"]].apply(lambda x: x.sum(), axis=1)
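Since the list above also mentions `map` and `applymap`, here is a short illustrative sketch of both (in recent pandas versions `DataFrame.map` replaces `applymap`):

```python
# map: element-wise on a Series
df["Name"].map(str.upper)

# applymap: element-wise on a DataFrame
df[["Name", "Last Name"]].applymap(len)
```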
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
_... maybe a more useful case ..._
df.apply(lambda x: " ".join(x[["Name", "Last Name"]]), axis=1)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

Selection and Indexing

**Column index**
df["Name"] df[["Name", "Last Name"]] df.dead
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Row index**

In addition to the `[]` operator, pandas ships with other indexing operators such as `.loc[]` and `.iloc[]`, among others.

* `.loc[]` is primarily __label based__, but may also be used with a boolean array.
* `.iloc[]` is primarily __integer position based__ (from 0 to length-1 of the axis), but may also be used with a boolean array.
df.head(2)
df.loc[1]
df.iloc[1]
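One detail worth illustrating: label-based slicing with `.loc` includes both endpoints, while position-based slicing with `.iloc` excludes the stop position (a small sketch using the `id` index set above):

```python
df.loc[1:3]    # rows with labels 1, 2 and 3
df.iloc[1:3]   # rows at positions 1 and 2 only
```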
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Row and column indices**

`df.loc[row, col]`
df.loc[1, "Last Name"] df.loc[2:4, ["Name", "dead"]]
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Logical indexing**
df df["no_of_songs"] > 50 df.loc[df["no_of_songs"] > 50] df.loc[(df["no_of_songs"] > 50) & (df["year_born"] >= 1942)] df.loc[(df["no_of_songs"] > 50) & (df["year_born"] >= 1942), ["Last Name", "Name"]]
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

Manipulating columns, rows and particular entries

**Add a row to the data set**
from numpy import nan

df.loc[5] = ["Mickey", "Mouse", nan, 1928, nan]
df
df.dtypes
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
_Note that the variable `dead` changed. Its values changed from `True`/`False` to `1.0`/`0.0`. Consequently its `dtype` changed from `bool` to `float64`._

**Add a column to the data set**
# pd.datetime was removed in recent pandas versions; pd.Timestamp works the same way here
pd.Timestamp.today()
now = pd.Timestamp.today().year
now
df["age"] = now - df.year_born
df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Change a particular entry**
df.loc[5, "Name"] = "Minnie" df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***

Plotting

The plotting functionality in pandas is built on top of matplotlib. It is quite convenient to start the visualization process with basic pandas plotting and to switch to matplotlib to customize the pandas visualization.

`plot` method
# this call causes the figures to be plotted below the code cells
%matplotlib inline

df
df[["no_of_songs", "age"]].plot()
df["dead"].plot.hist()
df["age"].plot.bar()
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
...some notes on plotting with Python

Plotting is an essential component of data analysis. However, the Python visualization world can be a frustrating place. There are many different options and choosing the right one is a challenge. (If you dare, take a look at the [Python Visualization Landscape](https://github.com/rougier/python-visualization-landscape).)

[matplotlib](https://matplotlib.org/) is probably the most well known 2D plotting Python library. It allows you to produce publication-quality figures in a variety of formats and interactive environments across platforms. However, matplotlib is a cause of frustration due to its complex syntax and due to the existence of two interfaces, a __MATLAB-like state-based interface__ and an __object-oriented interface__. Hence, __there is always more than one way to build a visualization__. Another source of confusion is that matplotlib is well integrated into other Python libraries, such as [pandas](http://pandas.pydata.org/index.html), [seaborn](http://seaborn.pydata.org/index.html), [xarray](http://xarray.pydata.org/en/stable/), among others. Hence, it is not always obvious when to use pure matplotlib or a tool that is built on top of matplotlib.

We import the `matplotlib` library and matplotlib's `pyplot` module using the canonical commands

    import matplotlib as mpl
    import matplotlib.pyplot as plt

With respect to matplotlib terminology it is important to understand that the __`Figure`__ is the final image that may contain one or more axes, and that the __`Axes`__ represents an individual plot. To create a `Figure` object we call

    fig = plt.figure()

However, a more convenient way to create a `Figure` object and an `Axes` object at once is to call

    fig, ax = plt.subplots()

Then we can use the `Axes` object to add data for plotting.
import matplotlib.pyplot as plt

# create a Figure and Axes object
fig, ax = plt.subplots(figsize=(10, 5))

# plot the data and reference the Axes object
df["age"].plot.bar(ax=ax)

# add some customization to the Axes object
ax.set_xticklabels(df["Name"], rotation=0)
ax.set_xlabel("")
ax.set_ylabel("Age", size=14)
ax.set_title("The Beatles and ... something else", size=18);
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
Interior Point Methods

In the previous seminar:
- Optimization problems with constraints given by simple sets
- The projected gradient method as a special case of the proximal gradient method
- The conditional gradient (Frank-Wolfe) method and its convergence

Convex optimization problem with equality constraints
\begin{equation*}\begin{split}&\min f(x) \\ \text{s.t. } & Ax = b,\end{split}\end{equation*}
where $f$ is convex and twice differentiable, $A \in \mathbb{R}^{p \times n}$ and $\mathrm{rank} \; A = p < n$.

The dual problem

The dual function:
\begin{equation*}\begin{split}g(\mu) & = -b^{\top}\mu + \inf_x(f(x) + \mu^{\top}Ax) \\& = -b^{\top}\mu - \sup_x((-A^{\top}\mu)^{\top}x -f(x)) \\& = -b^{\top}\mu - f^*(-A^{\top}\mu)\end{split}\end{equation*}
The dual problem:
$$\max_\mu -b^{\top}\mu - f^*(-A^{\top}\mu)$$

**Approach 1**: compute the conjugate function and solve an unconstrained optimization problem.

**Difficulties**
- it is not always easy to recover a solution of the primal problem from a solution of the dual
- the conjugate function $f^*$ has to be twice differentiable for the dual problem to be solved quickly, which is not always the case

Optimality conditions
- $Ax^* = b$
- $f'(x^*) + A^{\top}\mu^* = 0$

or
$$ \begin{bmatrix} f' & A^{\top} \\ A & 0 \end{bmatrix} \begin{bmatrix} x^{*} \\ \mu^{*} \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix} $$

**Approach 2**: solve this (in general nonlinear) system with Newton's method.

**Question**: in which case does the system turn out to be linear?

Newton's method for convex problems with equality constraints
\begin{equation*}\begin{split}& \min_v f(x) + f'(x)^{\top}v + \frac{1}{2}v^{\top}f''(x)v\\\text{s.t. } & A(x + v) = b\end{split}\end{equation*}
From the optimality conditions we get
$$ \begin{bmatrix} f''(x) & A^{\top} \\ A & 0 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} = \begin{bmatrix} -f'(x) \\ 0 \end{bmatrix} $$

**The Newton step is defined only when this matrix is nonsingular!**

**Exercise.** Work out in how many iterations Newton's method converges for a quadratic function with equality constraints.

Linearization of the optimality conditions
- $A(x + v) = b \rightarrow Av = 0$
- $f'(x + v) + A^{\top}w \approx f'(x) + f''(x)v + A^{\top}w = 0$

or
- $f''(x)v + A^{\top}w = -f'(x)$

Pseudocode

**Important:** the starting point must lie in the feasible set!

```python
def NewtonEqualityFeasible(f, gradf, hessf, A, b, stop_crit, line_search, x0, tol):
    x = x0
    n = x.shape[0]
    while True:
        newton_matrix = [[hessf(x), A.T], [A, 0]]
        rhs = [-gradf(x), 0]
        w = solve_lin_sys(newton_matrix, rhs)
        h = w[:n]
        if stop_crit(x, h, gradf(x), **kwargs) < tol:
            break
        alpha = line_search(x, h, f, gradf(x), **kwargs)
        x = x + alpha * h
    return x
```

Stopping criterion

Let us derive an expression for the quantity
$$f(x) - \inf_v(\hat{f}(x + v) \; | \; A(x+v) = b),$$
where $\hat{f}$ is the quadratic approximation of $f$. Multiplying the equation $f''(x)h + A^{\top}w = -f'(x)$ by $h^{\top}$ on the left and using $Ah = 0$, we obtain
$$h^{\top}f''(x)h = -f'(x)^{\top}h.$$
Then
$$\inf_v(\hat{f}(x + v) \; | \; A(x+v) = b) = f(x) - \frac{1}{2}h^{\top}f''(x)h.$$
**Conclusion:** the quantity $h^{\top}f''(x)h$ is the most adequate stopping criterion for Newton's method.

Convergence theorem

The convergence of the method is analogous to that of Newton's method for unconstrained optimization.

**Theorem.** Assume that
- the level set $S = \{ x \; | \; x \in D(f), \; f(x) \leq f(x_0), \; Ax = b \}$ is closed and $x_0 \in D(f), \; Ax_0 = b$
- the Hessian $f''(x)$ is Lipschitz continuous on $S$
- on $S$, $\|f''(x)\|_2 \leq M$ and the norm of the inverse of the KKT matrix is bounded from above

Then Newton's method converges to the pair $(x^*, \mu^*)$ linearly, and quadratically once it gets sufficiently close to the solution.

The case of an infeasible starting point
- Newton's method requires the starting point to lie in the feasible set
- What if finding such a point is not straightforward, e.g. if the domain of $f$ does not coincide with $\mathbb{R}^n$?
- If the starting point is not feasible, the KKT conditions can be written as
$$\begin{bmatrix}f''(x) & A^{\top}\\A & 0\end{bmatrix}\begin{bmatrix}v\\w\end{bmatrix} = -\begin{bmatrix}f'(x)\\{\color{red}{Ax - b}}\end{bmatrix}$$
- If $x$ is feasible, this system coincides with the one for the ordinary Newton method

Primal-dual interpretation
- A method is *primal-dual* if both the primal and the dual variables are updated at every iteration
- Let us show what this means. Write the optimality conditions as
$$r(x^*, \mu^*) = (r_d(x^*, \mu^*), r_p(x^*, \mu^*)) = 0,$$
where $r_p(x, \mu) = Ax - b$ and $r_d(x, \mu) = f'(x) + A^{\top}\mu$
- Solve this system with Newton's method:
$$r(y + z) \approx r(y) + Dr(y)z = 0$$
- Define the primal-dual Newton step as the solution of the linear system
$$Dr(y)z = -r(y)$$
or, in more detail,
$$\begin{bmatrix}f''(x) & A^{\top}\\A & 0\end{bmatrix}\begin{bmatrix}z_p\\z_d\end{bmatrix} = -\begin{bmatrix}r_d(x, \mu)\\r_p(x, \mu)\end{bmatrix}= - \begin{bmatrix}f'(x) + A^{\top}\mu\\Ax - b\end{bmatrix}$$
- Substituting $z_d^+ = \mu + z_d$ we get
$$\begin{bmatrix}f''(x) & A^{\top}\\A & 0\end{bmatrix}\begin{bmatrix}z_p\\z_d^+\end{bmatrix}= - \begin{bmatrix}f'(x)\\Ax - b\end{bmatrix}$$
- This system is fully equivalent to the one obtained earlier, with the notation
$$v = z_p \qquad w = z_d^+ = \mu + z_d$$
- Newton's method gives a step for the primal variable and an updated value for the dual variable

Initialization
- This gives a convenient way to choose the starting point: finding a point in the domain of $f$ is much easier than finding one in the intersection of the domain and the feasible set
- Newton's method with an infeasible starting point cannot detect whether the constraints are consistent

Pseudocode

```python
def NewtonEqualityInfeasible(f, gradf, hessf, A, b, stop_crit, line_search, x0, mu0, tol):
    x = x0
    mu = mu0
    n = x.shape[0]
    while True:
        z_p, z_d = ComputeNewtonStep(hessf(x), A, b)
        if stop_crit(x, z_p, z_d, gradf(x), **kwargs) < tol:
            break
        alpha = line_search(x, z_p, z_d, f, gradf(x), **kwargs)
        x = x + alpha * z_p
        mu = z_d
    return x
```

Stopping criterion and line search
- The change of $r_p$ after a step $\alpha z_p$ (using $A(x + z_p) = b$):
$$A(x + \alpha z_p) - b = Ax + \alpha(b - Ax) - b = (1 - \alpha)(Ax - b)$$
- The total change after $k$ steps:
$$r^{(k)} = \prod_{i=0}^{k-1}(1 - \alpha^{(i)})r^{(0)}$$
- Stopping criterion: $Ax = b$ and $\|r(x, \mu)\|_2 \leq \varepsilon$
- Line search: $c \in (0, 1/2)$, $\beta \in (0, 1)$

```python
def linesearch(r, x, mu, z_p, z_d, c, beta):
    alpha = 1
    while np.linalg.norm(r(x + alpha * z_p, mu + alpha * z_d)) >= (1 - c * alpha) * np.linalg.norm(r(x, mu)):
        alpha *= beta
    return alpha
```

Convergence theorem

The result is analogous to the one for a feasible starting point.

**Theorem.** Assume that
- the sublevel set $S = \{(x, \mu) \; | \; x \in D(f), \; \| r(x, \mu) \|_2 \leq \| r(x_0, \mu_0) \|_2 \}$ is closed
- on $S$ the norm of the inverse of the KKT matrix is bounded
- the Hessian is Lipschitz continuous on $S$.

Then the method converges linearly far from the solution and quadratically once it is sufficiently close to it.

The general convex optimization problem
\begin{equation*}\begin{split}& \min_{x \in \mathbb{R}^n} f_0(x)\\\text{s.t. } & f_i (x) \leq 0 \qquad i=1,\ldots,m\\& Ax = b,\end{split}\end{equation*}
where the $f_i$ are convex and twice continuously differentiable, $A \in \mathbb{R}^{p \times n}$ and $\mathrm{rank} \; A = p < n$. We assume the problem is strictly feasible, i.e. Slater's condition holds.

Optimality conditions
- Primal feasibility:
$$Ax^* = b, \; f_i(x^*) \leq 0, \; i = 1,\ldots,m$$
- Dual feasibility:
$$\lambda^* \geq 0$$
- Stationarity of the Lagrangian:
$$f'_0(x^*) + \sum_{i=1}^m \lambda^*_if'_i(x^*) + A^{\top}\mu^* = 0$$
- Complementary slackness:
$$\lambda^*_i f_i(x^*) = 0, \qquad i = 1,\ldots, m$$

Idea
- Reduce the problem with **inequality** constraints to a sequence of problems with **equality** constraints
- Use the methods developed for equality-constrained problems

\begin{equation*}\begin{split}& \min f_0(x) + \sum_{i=1}^m I_-(f_i(x))\\\text{s.t. } & Ax = b,\end{split}\end{equation*}
where $I_-$ is the indicator function
$$I_-(u) = \begin{cases}0, & u \leq 0\\\infty, & u > 0\end{cases}$$

**Problem.** The objective function is now **non-differentiable**.

Logarithmic barrier

**Idea.** Approximate the function $I_-(u)$ by
$$\hat{I}_-(u) = -t\log(-u),$$
where $t > 0$ is a parameter.
- Both $I_-(u)$ and $\hat{I}_-(u)$ are convex and nondecreasing
- However, $\hat{I}_-(u)$ is **differentiable** and approaches $I_-(u)$ as $t \to 0$
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-2, 0, 100000, endpoint=False)
plt.figure(figsize=(10, 6))
for t in [0.1, 0.5, 1, 1.5, 2]:
    plt.plot(x, -t * np.log(-x), label=r"$t = " + str(t) + "$")
plt.legend(fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xlabel("u", fontsize=20)
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
"Ограниченная" задача\begin{equation*}\begin{split}& \min f_0(x) + \sum_{i=1}^m -t \log(-f_i(x))\\\text{s.t. } & Ax = b\end{split}\end{equation*}- Задача по-прежнему **выпуклая**- Функция $$\phi(x) = -\sum\limits_{i=1}^m \log(-f_i(x))$$ называется *логарифмическим барьером*. Её область определения - множество точек, для котороых ограничения типа неравенств выполняются строго.**Упражнение.** Найдите градиент и гессиан $\phi(x)$ Центральный путьДля каждого $t > 0$ "ограниченная" задача имеет единственное решение $x^*(t)$.**Определение.** Последовательность $x^*(t)$ для $t > 0$ образует *центральный путь*. Условия оптимальности для "ограниченной" задачи- Разрешимость прямой задачи$$Ax^*(t) = b, \; f_i(x^*) < 0, \; i = 1,\ldots,m$$- Стационарность лагранжиана\begin{equation*}\begin{split}& f'_0(x^*(t)) + \phi'(x^*(t)) + A^{\top}\hat{\mu} = \\& = f'_0(x^*(t)) - t\sum_{i=1}^m \frac{f_i'(x^*(t))}{f_i(x^*(t))} + A^{\top}\hat{\mu} = 0\end{split}\end{equation*} - Обозначим $$\lambda^*_i(t) = -\frac{t}{f_i(x^*(t))} \; i=1,\ldots,m \text{ и } \mu^* = \hat{\mu}$$- Тогда условие оптимальности можно записать как$$f'_0(x^*(t)) + \sum_{i=1}^m \lambda^*_i(t)f_i'(x^*(t)) + A^{\top}\mu^* = 0$$- Тогда $x^*(t)$ минимизирует лагранжиан $$L = f_0(x) + \sum_{i=1}^m \lambda_if_i(x) + \mu^{\top}(Ax - b)$$для $\lambda = \lambda^*(t)$ и $\mu = \mu^*$. Зазор двойственности- Двойственная функция $g(\lambda^*(t), \mu^*)$ конечна и представима в виде\begin{equation*}\begin{split}g(\lambda^*(t), \mu^*) & = f_0(x^*(t)) + \sum_{i=1}^m \lambda^*_i(t)f_i(x^*(t)) + (\mu^*)^{\top}(Ax^*(t) - b)\\& = f_0(x^*(t)) - mt\end{split}\end{equation*}- Зазор двойственности$$f_0(x^*(t)) - p^* \leq mt$$- При $t \to 0$ зазор двойственности равен 0 и центральный путь сходится к решению исходной задачи. 
ККТ интерпретацияУсловия оптимальности для "ограниченной" задачи эквивалентны условиям оптимальности для исходной задачи если$$-\lambda_i f_i(x) = 0 \to - \lambda_i f_i(x) = t \quad i = 1,\ldots, m$$ Физическая интерпретация- Предположим, что ограничений типа равенства нет- Рассмотрим неквантовую частицу в поле сил- Каждому ограничению $f_i(x) \leq 0$ поставим в соответствие силу$$F_i(x) = -\nabla(-\log(-f_i(x))) = \frac{f'_i(x)}{f_i(x)}$$- Целевой функции также поставим в соответствие силу $$F_0(x) = -\frac{f'_0(x)}{t}$$- Каждая точка из центрального пути $x^*(t)$ - это положение частицы, в котором выполняется баланс сил ограничений и целевой функции- С уменьшением $t$ сила для целевой функции доминирует, и частица стремится занять положение, расположенное ближе к оптимальному- Поскольку сила ограничений стремится к бесконечности при приближении частицы к границе, частица никогда не вылетит из допустимого множества Барьерный метод- $x_0$ должна быть допустимой- $t_0 > 0$ - начальное значение параметра- $\alpha \in (0, 1)$ - множитель для уменьшения $t_0$```pythondef BarrierMethod(f, x0, t0, tol, alpha, **kwargs): x = x0 t = t0 while True: x = SolveBarrierProblem(f, t, x, **kwargs) if m * t < tol: break t *= alpha return x``` Точность решения "ограниченной" задачи- Точное решение "ограниченной" задачи не требуется, так как приближённый центральный путь всё равно сойдётся к решению исходной задачи- Двойственные переменные перестают быть двойственными при неточном решении, но это поправимо введением поправочных слагаемых- Разница в стоимости точного и неточного центрального пути - несколько итераций метода Ньютона, поэтому существенного ускорения добиться нельзя Выбор параметров- Множитель $\alpha$ - При $\alpha \sim 1$, **мало** итераций нужно для решения "ограниченной" задачи, но **много** для нахождения точного решения исходной задачи - При $\alpha \sim 10^{-5}$ **много** итераций нужно для решения "ограниченной" задачи, но **мало** для нахождения точного решения исходной задачи- Начальный параметр $t_0$ - Аналогичная альтернатива как и для параметра $\alpha$ - Параметр $t_0$ задаёт начальную точку для центрального пути Почти теорема сходимости- Как было показано выше при $t \to 0$ барьерный метод сходится к решению исходной задачи- Скорость сходимости напрямую связана с параметрами $\alpha$ и $t_0$, как показано ранее- Основная сложность - быстрое решение вспомогательных задач методом Ньютона Задача поиска допустимого начального приближения- Барьерный метод требует допустимого начального приближения- Метод разбивается на две фазы - Первая фаза метода ищет допустимое начальное приближение - Вторая фаза использует найденное начальное приближение для запуска барьерного метода Первая фаза методаПростой поиск допустимой точки\begin{equation*}\begin{split}& \min s\\\text{s.t. } & f_i(x) \leq s\\& Ax = b\end{split}\end{equation*}- эта задача всегда имеет строго допустимое начальное приближение- если $s^* < 0$, то $x^*$ строго допустима и может быть использована в барьерном методе- если $s^* > 0$, то задача не разрешима и допустимое множество пусто Сумма несогласованностей\begin{equation*}\begin{split}& \min s_1 + \ldots + s_m\\\text{s.t. 
} & f_i(x) \leq s_i\\& Ax = b\\& s \geq 0\end{split}\end{equation*}- оптимальное значене равно нулю и достигается тогда и только тогда когда система ограничений совместна- если задача неразрешима, то можно определить какие ограничения к этому приводят, то есть какие $s_i > 0$ Вторая фаза метода- После получения допустимой начальной точки $x_0$ выполняется обычный метод Ньютона для задачи с ограничениями равенствами Прямо-двойственный методПохож на барьерный метод, но- нет разделения на внешние итерации и внутренние: на каждой итерации обновляются прямые и двойственные переменные- направление определяется методом Ньютона, применённого к модифицированной системе ККТ- последовательность точек в прямо-двойственном методе не обязательно допустимы - работает даже когда задача не строго допустима Сходимость для квадратичной целевой функцииПри некоторых предположениях о начальной точке и начальном значении $\mu$, можно показать, что для достижения $\mu_k \leq \varepsilon$ потребуется $$\mathcal{O}\left(\sqrt{n}\log \left( \frac{1}{\varepsilon}\right)\right)$$ итерацийДоказательство и все детали можно найти [тут](https://epubs.siam.org/doi/book/10.1137/1.9781611971453?mobileUi=0) или [тут](https://www.maths.ed.ac.uk/~gondzio/reports/ipmXXV.pdf)- Сравните с методами типа градиентного спуска, которые дают сходимость типа $\mathcal{O}\left( \frac{1}{\varepsilon} \right)$- Зависит от размерности как $\sqrt{n}$- На практике зависимость от размерности ещё слабее Резюме- Метод Ньютона для выпуклой задачи с оганичениями типа равенств- Случай недопустимой начальной точки- Прямой барьерный метод- Прямо-двойственный метод Применение методов внутренней точки к задаче линейного программированияИсходная задача\begin{align*}&\min_x c^{\top}x \\\text{s.t. } & Ax = b\\& x_i \geq 0, \; i = 1,\dots, n\end{align*}Аппроксимированная задача\begin{align*}&\min_x c^{\top}x {\color{red}{- \mu \sum\limits_{i=1}^n \ln x_i}} \\\text{s.t. } & Ax = b\\\end{align*}для некоторого $\mu > 0$ Барьерная функция**Определение.** Функция $B(x, \mu) = -\mu\ln x$ называется *барьерной* для задачи с ограничением $x \geq 0$.Более подробно о таких функциях будет рассказано в контексте нелинейной условной оптимизации... 
Что произошло?- Сделали из линейной задачу нелинейную- Перенесли ограничение типа неравенства в целевую функцию- Ввели дополнительный параметр $\mu$ Почему это хорошо?Переход к задаче с ограничениями типа равенств $\to$ упрощение условий оптимальности, в частности- Исключено требование дополняющей нежёсткости- Исключено условие неотрицательности множителя Лагранжа для ограничения типа неравенства Условия оптимальности- Лагранжиан: $L = c^{\top}x - \mu\sum\limits_{i=1}^n \ln x_i + \lambda^{\top}(Ax - b)$- Стационарная точка $L$: $$c - \mu X^{-1}e + A^{\top}\lambda = 0,$$где $X = \mathrm{diag}(x_1, \dots, x_n)$ и $e = [1, \dots, 1]$- Ограничение типа равенства: $Ax = b$ Пусть $s = \mu X^{-1}e$, тогда условия оптимальности можно переписать так:- $A^{\top}\lambda + c - s = 0 $- $Xs = {\color{red}{\mu e}}$- $Ax = b$Также $x > 0 \Rightarrow s > 0$ Сравнение с условиями оптимальности для исходной задачи- Лагранжиан: $L = c^{\top}x + \lambda^{\top}(Ax - b) - s^{\top}x$- Условие стационарности: $c + A^{\top}\lambda - s = 0$- Допустимость прямой задачи: $Ax = b, \; x \geq 0$- Допустимость двойственной: $s \geq 0$- Условие дополняющей нежёсткости: $s_ix_i = 0$ После упрощения- $A^{\top}\lambda + c - s = 0$- $Ax = b$- $Xs = {\color{red}{0}}$- $x \geq 0, \; s \geq 0$ Вывод- Введение барьерной функции c множителем $\mu$ эквивалентно релаксации условий дополняющей нежёсткости на параметр $\mu$- При $\mu \to 0$ решения задач совпадают!- Итеративное решение задачи с барьерной функцией вместе с уменьшением $\mu$. Последовательность решений сойдётся к вершине симплекса по траектории из точек, лежащих внутри симплекса. Общая схема```pythondef GeneralInteriorPointLP(c, A, b, x0, mu0, rho, tol): x = x0 mu = mu0 e = np.ones(c.shape[0]) while True: primal_var, dual_var = StepInsideFeasibleSet(c, A, b, x, mu) mu *= rho if converge(primal_var, dual_var, c, A, b, tol) and mu < tol: break return x``` Как решать задачу с барьерной функцией?- Прямой метод - Прямо-двойственный метод Прямой методВспомним исходную задачу:\begin{align*}&\min_x c^{\top}x - \mu \sum\limits_{i=1}^n \ln x_i \\\text{s.t. } & Ax = b\\\end{align*}Идея: приблизим целевую функцию до второго порядка, как в методе Ньютона. РеализацияНа $(k+1)$-ой итерации необходимо решить следующую задачу: \begin{align*}&\min_p \frac{1}{2}p^{\top}Hp + g^{\top}p\\\text{s.t. } & A(x_k + p) = b,\\\end{align*}где $H = \mu X^{-2}$ - гессиан, и $g = c - \mu X^{-1}e$ - градиент. Снова KKTВыпишем условия ККТ для этой задачи- $Hp + g + A^{\top}\lambda = 0$- $Ap = 0$или$$\begin{bmatrix} H & A^{\top}\\ A & 0 \end{bmatrix} \begin{bmatrix} p\\ \lambda \end{bmatrix} = \begin{bmatrix} -g \\ 0 \end{bmatrix}$$ Из первой строки:$$-\mu X^{-2}p + A^{\top}\lambda = c - \mu X^{-1}e$$$$-\mu Ap + AX^{2}A^{\top}\lambda = AX^2c - \mu AXe$$$$AX^{2}A^{\top}\lambda = AX^2c - \mu AXe$$Так как $X \in \mathbb{S}^n_{++}$ и $A$ полного ранга, то уравнение имеет единственное решение $\lambda^*$. Найдём направление $p$$$-\mu p + X^2A^{\top}\lambda^* = X^2c - \mu Xe = X^2c - \mu x$$$$p = x + \frac{1}{\mu}X^2(A^{\top}\lambda^* - c)$$ Способы решения системы из ККТ1. Прямой способ: формирование матрицы $(n + m) \times (n + m)$ и явное решение системы - $\frac{1}{3}(n + m)^3$2. Последовательное исключение переменных: - $Hp + A^{\top}\lambda = -g$, $p = -H^{-1}(g + A^{\top}\lambda)$ - $Ap = -AH^{-1}(g + A^{\top}\lambda) = -AH^{-1}A^{\top}\lambda - AH^{-1}g = 0$ Здесь матрица $-AH^{-1}A^{\top}$ есть *дополнение по Шуру* матрицы $H$.3. 
Алгоритм вычисления решения при последовательном исключении переменных - Вычислить $H^{-1}g$ и $H^{-1}A^{\top}$ - $f_H + (m+1)s_H$ операций - Вычислить дополнение по Шуру $-AH^{-1}A^{\top}$ - $\mathcal{O}(m^2n)$ - Найти $\lambda$ - $\frac{1}{3}m^3$ операций - Найти $p$ - $s_H + \mathcal{O}(mn)$ операций4. Итого: $f_H + ms_H + \frac{m^3}{3} + \mathcal{O}(m^2n)$ уже гораздо быстрее прямого способа Используем структуру матрицы $H$- В нашем случае $H = \mu X^{-2}$ - диагональная матрица!- $f_H$ - $n$ операций- $s_H$ - $n$ операций- Итоговая сложность $\frac{m^3}{3} + \mathcal{O}(m^2n)$ операций, где $m \ll n$ Поиск шага $\alpha$- Обычный линейный поиск с условиями достаточного убывания- Условие $A(x_k + \alpha p) = b$ выполняется автоматически Псевдокод прямого барьерного метода```pythondef PrimalBarrierLP(c, A, b, x0, mu0, rho, tol): x = x0 mu = mu0 e = np.ones(x.shape[0]) while True: p, lam = ComputeNewtonDirection(c, x, A, mu) alpha = line_search(p, mu, c, x) x = x + alpha * p mu = rho * mu if mu < tol and np.linalg.norm(x.dot(c - A.T.dot(lam)) - mu * e) < tol: break return x``` Сравнение барьерного метода и прямого метода внутренней точки- Пример Klee-Minty c прошлого семинара\begin{align*}& \max_{x \in \mathbb{R}^n} 2^{n-1}x_1 + 2^{n-2}x_2 + \dots + 2x_{n-1} + x_n\\\text{s.t. } & x_1 \leq 5\\& 4x_1 + x_2 \leq 25\\& 8x_1 + 4x_2 + x_3 \leq 125\\& \ldots\\& 2^n x_1 + 2^{n-1}x_2 + 2^{n-2}x_3 + \ldots + x_n \leq 5^n\\& x \geq 0\end{align*}- Какая сложность работы симплекс метода? - Сведение к стандартной форме\begin{align*}& \min_{x, \; z} -c^{\top}x \\\text{s.t. } & Ax + z = b\\& x \geq 0, \quad z \geq 0\end{align*}- Сравним скорость работы прямого барьерного метода и симплекс-метода
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize as scopt
import scipy.linalg as sclin

def NewtonLinConstraintsFeasible(f, gradf, hessf, A, x0, line_search, linsys_solver, args=(),
                                 disp=False, disp_conv=False, callback=None, tol=1e-6, max_iter=100, **kwargs):
    x = x0.copy()
    n = x0.shape[0]
    iteration = 0
    lam = np.random.randn(A.shape[0])
    while True:
        gradient, hess = gradf(x, *args), hessf(x, *args)
        h = linsys_solver(hess, A, gradient)
        descent_dir = h[:n]
        decrement = descent_dir.dot(hessf(x, *args).dot(descent_dir))
        if decrement < tol:
            if disp_conv:
                print("Tolerance achieved! Decrement = {}".format(decrement))
            break
        alpha = line_search(x, descent_dir, f, gradf, args, **kwargs)
        if alpha < 1e-16:
            if disp_conv:
                print("Step is too small!")
        x = x + alpha * descent_dir
        if callback is not None:
            callback((descent_dir, x))
        iteration += 1
        if disp:
            print("Current function val = {}".format(f(x, *args)))
            print("Newton decrement = {}".format(decrement))
        if iteration >= max_iter:
            if disp_conv:
                print("Maxiter exceeds!")
            break
    res = {"x": x, "num_iter": iteration, "tol": decrement}
    return res

def simple_solver(hess, A, gradient):
    n = hess.shape[0]
    n_lin_row, n_lin_col = A.shape
    modified_hess = np.zeros((n + n_lin_row, n + n_lin_row))
    modified_hess[:n, :n] = hess
    modified_hess[n:n + n_lin_row, :n_lin_col] = A
    modified_hess[:n_lin_col, n:n + n_lin_row] = A.T
    rhs = np.zeros(n + n_lin_row)
    rhs[:n] = -gradient
    h = np.linalg.solve(modified_hess, rhs)
    return h

def elimination_solver(hess, A, gradient):
    inv_hess_diag = np.divide(1.0, np.diag(hess))
    inv_hess_grad = np.multiply(-inv_hess_diag, gradient)
    rhs = A.dot(inv_hess_grad)
    L_inv_hess = np.sqrt(inv_hess_diag)
    AL_inv_hess = A * L_inv_hess
    # print(AL_inv_hess.shape)
    S = AL_inv_hess.dot(AL_inv_hess.T)
    # cho_S = sclin.cho_factor(S)
    # w = sclin.cho_solve(cho_S, rhs)
    w = np.linalg.solve(S, rhs)
    v = np.subtract(inv_hess_grad, np.multiply(inv_hess_diag, A.T.dot(w)))
    # h = np.zeros(hess.shape[1] + A.shape[0])
    # h[:hess.shape[1]] = v
    # h[hess.shape[1]:hess.shape[1] + A.shape[0]] = w
    return v

def backtracking(x, descent_dir, f, grad_f, args, **kwargs):
    beta1 = kwargs["beta1"]
    rho = kwargs["rho"]
    alpha = 1
    while f(x + alpha * descent_dir, *args) >= f(x, *args) + beta1 * alpha * grad_f(x, *args).dot(descent_dir) \
            or np.isnan(f(x + alpha * descent_dir, *args)):
        alpha *= rho
        if alpha < 1e-16:
            break
    return alpha

def generate_KleeMinty_test_problem(n):
    c = np.array([2**i for i in range(n)])
    c = -c[::-1]
    bounds = [(0, None) for i in range(n)]
    b = np.array([5**(i+1) for i in range(n)])
    a = np.array([1] + [2**(i+1) for i in range(1, n)])
    A = np.zeros((n, n))
    for i in range(n):
        A[i:, i] = a[:n-i]
    return c, A, b, bounds

n = 7
c, A, b, _ = generate_KleeMinty_test_problem(n)
eps = 1e-10

def f(x, c, mu):
    n = c.shape[0]
    return c.dot(x[:n]) - mu * np.sum(np.log(eps + x))

def gradf(x, c, mu):
    grad = np.zeros(len(x))
    n = c.shape[0]
    grad[:n] = c - mu / (eps + x[:n])
    grad[n:] = -mu / (eps + x[n:])
    return grad

def hessf(x, c, mu):
    return mu * np.diag(1. / (eps + x)**2)

A_lin = np.zeros((n, n + A.shape[0]))
A_lin[:n, :n] = A
A_lin[:n, n:n + A.shape[0]] = np.eye(A.shape[0])
mu = 0.1
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Let us check that the gradient is computed correctly.
scopt.check_grad(f, gradf, np.random.rand(n), c, mu)
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Choosing an initial point that is feasible with respect to the constraints and lies in the domain of the objective function.
x0 = np.zeros(2*n)
x0[:n] = np.random.rand(n)
x0[n:2*n] = b - A.dot(x0[:n])
print(np.linalg.norm(A_lin.dot(x0) - b))
print(np.sum(x0 <= 1e-6))
1.1457157353758233e-13 0
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Let us check convergence.
hist_conv = []
def cl(x):
    hist_conv.append(x)

res = NewtonLinConstraintsFeasible(f, gradf, hessf, A_lin, x0, backtracking, elimination_solver, (c, mu),
                                   callback=cl, max_iter=2000, beta1=0.1, rho=0.7)
print("Decrement value = {}".format(res["tol"]))
fstar = f(res["x"], c, mu)
hist_conv_f = [np.abs(fstar - f(descdir_x[1], c, mu)) for descdir_x in hist_conv]

plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.semilogy(hist_conv_f)
plt.xlabel("Number of iteration, $k$", fontsize=18)
plt.ylabel("$f^* - f_k$", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)

hist_conv_x = [np.linalg.norm(res["x"] - x[1]) for x in hist_conv]
plt.subplot(1, 2, 2)
plt.semilogy(hist_conv_x)
plt.xlabel("Number of iteration, $k$", fontsize=18)
plt.ylabel("$\| x_k - x^*\|_2$", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
plt.tight_layout()
/Users/alex/anaconda3/envs/cvxpy/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in log
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Implementation of the barrier method
def BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, linesearch, linsys_solver,
                           tol=1e-8, max_iter=500, disp_conv=False, **kwargs):
    x = x0.copy()
    n = x0.shape[0]
    mu = mu0
    while True:
        res = NewtonLinConstraintsFeasible(f, gradf, hessf, A, x, linesearch, linsys_solver, (c, mu),
                                           disp_conv=disp_conv, max_iter=max_iter, beta1=0.01, rho=0.5)
        x = res["x"].copy()
        if n * mu < tol:
            break
        mu *= rho_mu
    return x

mu0 = 5
rho_mu = 0.5
x = BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, elimination_solver, max_iter=100)
%timeit BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, elimination_solver, max_iter=100)
%timeit BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, simple_solver, max_iter=100)
print(x[:n])
/Users/alex/anaconda3/envs/cvxpy/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in log /Users/alex/anaconda3/envs/cvxpy/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in log
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Comparison of running times
mu0 = 2
rho_mu = 0.5
n_list = range(3, 10)
n_iters = np.zeros(len(n_list))
times_simplex = np.zeros(len(n_list))
times_barrier_simple = np.zeros(len(n_list))

for i, n in enumerate(n_list):
    print("Current dimension = {}".format(n))
    c, A, b, bounds = generate_KleeMinty_test_problem(n)
    time = %timeit -o -q scopt.linprog(c, A, b, bounds=bounds, options={"maxiter": 2**max(n_list) + 1}, method="simplex")
    times_simplex[i] = time.best
    A_lin = np.zeros((n, n + A.shape[0]))
    A_lin[:n, :n] = A
    A_lin[:n, n:n + A.shape[0]] = np.eye(A.shape[0])
    x0 = np.zeros(2*n)
    x0[:n] = np.random.rand(n)
    x0[n:2*n] = b - A.dot(x0[:n])
    time = %timeit -o -q BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, simple_solver)
    times_barrier_simple[i] = time.best

plt.figure(figsize=(8, 5))
plt.semilogy(n_list, times_simplex, label="Simplex")
plt.semilogy(n_list, times_barrier_simple, label="Primal barrier")
plt.legend(fontsize=18)
plt.xlabel("Dimension, $n$", fontsize=18)
plt.ylabel("Computation time, sec.", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
---title: "kNN-algorithm"author: "Palaniappan S"date: 2020-09-05description: "-"type: technical_notedraft: false---
# importing required libraries
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# read the train and test dataset
train_data = pd.read_csv('train-data.csv')
test_data = pd.read_csv('test-data.csv')

print(train_data.head())

# shape of the dataset
print('Shape of training data :', train_data.shape)
print('Shape of testing data :', test_data.shape)

# separate the independent and target variable on training data
train_x = train_data.drop(columns=['Survived'], axis=1)
train_y = train_data['Survived']

# separate the independent and target variable on testing data
test_x = test_data.drop(columns=['Survived'], axis=1)
test_y = test_data['Survived']

model = KNeighborsClassifier()

# fit the model with the training data
model.fit(train_x, train_y)

# Number of Neighbors used to predict the target
print('\nThe number of neighbors used to predict the target : ', model.n_neighbors)

# predict the target on the train dataset
predict_train = model.predict(train_x)
print('Target on train data', predict_train)

# Accuracy Score on train dataset
accuracy_train = accuracy_score(train_y, predict_train)
print('accuracy_score on train dataset : ', accuracy_train)

# predict the target on the test dataset
predict_test = model.predict(test_x)
print('Target on test data', predict_test)

# Accuracy Score on test dataset
accuracy_test = accuracy_score(test_y, predict_test)
print('accuracy_score on test dataset : ', accuracy_test)
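The model above uses the default number of neighbors; a small optional sketch (not part of the original note) of tuning `n_neighbors` with cross-validation:

```python
from sklearn.model_selection import GridSearchCV

# search over n_neighbors with 5-fold cross-validation
param_grid = {'n_neighbors': list(range(1, 21))}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(train_x, train_y)
print('Best n_neighbors :', search.best_params_['n_neighbors'])
print('Cross-validated accuracy :', search.best_score_)
```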
Target on test data [0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 1 1 0 0 0 1 0 0 1 1 1 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 0 1 0 0 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 1 0 0 1 0 1 0 1 1 0 1 0 0 1 1 0 0 1 0 0 0 1 1 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 0 0] accuracy_score on test dataset : 0.7150837988826816
MIT
content/python/ml_algorithms/.ipynb_checkpoints/kNN-algorithm-checkpoint.ipynb
Palaniappan12345/mlnotes
Tutorial Part 2: Learning MNIST Digit Classifiers

In the previous tutorial, we learned some basics of how to load data into DeepChem and how to use the basic DeepChem objects to load and manipulate this data. In this tutorial, you'll put the parts together and learn how to train a basic image classification model in DeepChem. You might ask, why are we bothering to learn this material in DeepChem? Part of the reason is that image processing is an increasingly important part of AI for the life sciences. So learning how to train image processing models will be very useful for using some of the more advanced DeepChem features.

The MNIST dataset contains handwritten digits along with their human-annotated labels. The learning challenge for this dataset is to train a model that maps the digit image to its true label. MNIST has been a standard benchmark for machine learning for decades at this point.

![MNIST](https://github.com/deepchem/deepchem/blob/master/examples/tutorials/mnist_examples.png?raw=1)

Colab

This tutorial and the rest in this sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb)

Setup

We recommend running this tutorial on Google Colab. You'll need to run the following cell of installation commands on Colab to get your environment set up. If you'd rather run the tutorial locally, make sure you don't run these commands (since they'll download and install a new Anaconda python setup)
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')

from tensorflow.examples.tutorials.mnist import input_data

# TODO: This is deprecated. Let's replace with a DeepChem native loader for maintainability.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

import deepchem as dc
import tensorflow as tf
from tensorflow.keras.layers import Reshape, Conv2D, Flatten, Dense, Softmax

train = dc.data.NumpyDataset(mnist.train.images, mnist.train.labels)
valid = dc.data.NumpyDataset(mnist.validation.images, mnist.validation.labels)

keras_model = tf.keras.Sequential([
    Reshape((28, 28, 1)),
    Conv2D(filters=32, kernel_size=5, activation=tf.nn.relu),
    Conv2D(filters=64, kernel_size=5, activation=tf.nn.relu),
    Flatten(),
    Dense(1024, activation=tf.nn.relu),
    Dense(10),
    Softmax()
])
model = dc.models.KerasModel(keras_model, dc.models.losses.CategoricalCrossEntropy())

model.fit(train, nb_epoch=2)

from sklearn.metrics import roc_curve, auc
import numpy as np

print("Validation")
prediction = np.squeeze(model.predict_on_batch(valid.X))

fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
    fpr[i], tpr[i], thresh = roc_curve(valid.y[:, i], prediction[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
    print("class %s:auc=%s" % (i, roc_auc[i]))
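Regarding the TODO above: the `input_data` loader is deprecated, and one possible replacement is to build the same `NumpyDataset` objects from `tf.keras.datasets.mnist`. This is only a sketch under assumptions: it uses the Keras test split in place of the original validation split, and DeepChem may provide its own loader.

```python
import numpy as np
import tensorflow as tf
import deepchem as dc

(x_tr, y_tr), (x_va, y_va) = tf.keras.datasets.mnist.load_data()
x_tr = x_tr.reshape(-1, 784).astype('float32') / 255.0
x_va = x_va.reshape(-1, 784).astype('float32') / 255.0
train = dc.data.NumpyDataset(x_tr, np.eye(10)[y_tr])   # one-hot labels
valid = dc.data.NumpyDataset(x_va, np.eye(10)[y_va])
```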
Validation class 0:auc=0.9999482812520925 class 1:auc=0.9999327470315621 class 2:auc=0.9999223382455529 class 3:auc=0.9999378924197698 class 4:auc=0.999804920932277 class 5:auc=0.9997608046652174 class 6:auc=0.9999347825797615 class 7:auc=0.9997099080694587 class 8:auc=0.999882187740275 class 9:auc=0.9996286953889618
MIT
examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb
martonlanga/deepchem
_*H2 energy plot comparing full to particle hole transformations*_

This notebook demonstrates using Qiskit Chemistry to plot the ground state energy of the hydrogen (H2) molecule over a range of inter-atomic distances, using VQE and UCCSD with full and particle-hole transformations. The results are compared to the same energies as computed by the ExactEigensolver.

This notebook populates a dictionary, which is a programmatic representation of an input file, in order to drive the Qiskit Chemistry stack. Such a dictionary can be manipulated programmatically, and this is indeed the case here, where we alter the molecule supplied to the driver in each loop.

This notebook has been written to use the PYQUANTE chemistry driver. See the PYQUANTE chemistry driver readme if you need to install the external PyQuante2 library that this driver requires.
import numpy as np
import pylab
from qiskit_chemistry import QiskitChemistry

# Input dictionary to configure Qiskit Chemistry for the chemistry problem.
qiskit_chemistry_dict = {
    'problem': {'random_seed': 50},
    'driver': {'name': 'PYQUANTE'},
    'PYQUANTE': {'atoms': '', 'basis': 'sto3g'},
    'operator': {'name': 'hamiltonian', 'qubit_mapping': 'jordan_wigner', 'two_qubit_reduction': False},
    'algorithm': {'name': ''},
    'optimizer': {'name': 'COBYLA', 'maxiter': 10000},
    'variational_form': {'name': 'UCCSD'},
    'initial_state': {'name': 'HartreeFock'}
}
molecule = 'H .0 .0 -{0}; H .0 .0 {0}'
algorithms = ['VQE', 'ExactEigensolver']
transformations = ['full', 'particle_hole']

start = 0.5  # Start distance
by = 0.5     # How much to increase distance by
steps = 20   # Number of steps to increase by
energies = np.empty([len(transformations), len(algorithms), steps+1])
hf_energies = np.empty(steps+1)
distances = np.empty(steps+1)
eval_counts = np.empty([len(transformations), steps+1])

print('Processing step __', end='')
for i in range(steps+1):
    print('\b\b{:2d}'.format(i), end='', flush=True)
    d = start + i*by/steps
    qiskit_chemistry_dict['PYQUANTE']['atoms'] = molecule.format(d/2)
    for j in range(len(algorithms)):
        qiskit_chemistry_dict['algorithm']['name'] = algorithms[j]
        for k in range(len(transformations)):
            qiskit_chemistry_dict['operator']['transformation'] = transformations[k]
            solver = QiskitChemistry()
            result = solver.run(qiskit_chemistry_dict)
            energies[k][j][i] = result['energy']
            hf_energies[i] = result['hf_energy']
            if algorithms[j] == 'VQE':
                eval_counts[k][i] = result['algorithm_retvals']['eval_count']
    distances[i] = d
print(' --- complete')

print('Distances: ', distances)
print('Energies:', energies)
print('Hartree-Fock energies:', hf_energies)
print('VQE num evaluations:', eval_counts)

pylab.plot(distances, hf_energies, label='Hartree-Fock')
for j in range(len(algorithms)):
    for k in range(len(transformations)):
        pylab.plot(distances, energies[k][j], label=algorithms[j]+' + '+transformations[k])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('H2 Ground State Energy')
pylab.legend(loc='upper right')

pylab.plot(distances, np.subtract(hf_energies, energies[0][1]), label='Hartree-Fock')
for k in range(len(transformations)):
    pylab.plot(distances, np.subtract(energies[k][0], energies[k][1]), label='VQE + '+transformations[k])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference from ExactEigensolver')
pylab.legend(loc='upper left')

for k in range(len(transformations)):
    pylab.plot(distances, eval_counts[k], '-o', label='VQE + ' + transformations[k])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Evaluations')
pylab.title('VQE number of evaluations')
pylab.legend(loc='upper left')
_____no_output_____
Apache-2.0
community/aqua/chemistry/h2_particle_hole.ipynb
Chibikuri/qiskit-tutorials
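Because the input "file" above is just a Python dict, any field can be rewritten between runs. A minimal sketch of that pattern, using hypothetical values and only keys already defined in the cell above:

```python
# Sketch only: rewrite the molecule geometry, algorithm and transformation
# between runs. The values here are hypothetical; the loop above does the
# same thing programmatically for every distance step.
qiskit_chemistry_dict['PYQUANTE']['atoms'] = 'H .0 .0 -0.35; H .0 .0 0.35'
qiskit_chemistry_dict['algorithm']['name'] = 'ExactEigensolver'
qiskit_chemistry_dict['operator']['transformation'] = 'particle_hole'
```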
OT for domain adaptation on empirical distributions. This example introduces a domain adaptation problem in a 2D setting. It illustrates the problem of domain adaptation and introduces some optimal transport approaches to solve it. Quantities such as optimal couplings, the main coupling coefficients and transported samples are represented in order to give a visual understanding of what the transport methods are doing.
# Authors: Remi Flamary <[email protected]> # Stanislas Chambon <[email protected]> # # License: MIT License import matplotlib.pylab as pl import ot import ot.plot
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Generate data
n_samples_source = 150 n_samples_target = 150 Xs, ys = ot.datasets.make_data_classif('3gauss', n_samples_source) Xt, yt = ot.datasets.make_data_classif('3gauss2', n_samples_target) # Cost matrix M = ot.dist(Xs, Xt, metric='sqeuclidean')
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Instantiate the different transport algorithms and fit them
# EMD Transport ot_emd = ot.da.EMDTransport() ot_emd.fit(Xs=Xs, Xt=Xt) # Sinkhorn Transport ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1) ot_sinkhorn.fit(Xs=Xs, Xt=Xt) # Sinkhorn Transport with Group lasso regularization ot_lpl1 = ot.da.SinkhornLpl1Transport(reg_e=1e-1, reg_cl=1e0) ot_lpl1.fit(Xs=Xs, ys=ys, Xt=Xt) # transport source samples onto target samples transp_Xs_emd = ot_emd.transform(Xs=Xs) transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=Xs) transp_Xs_lpl1 = ot_lpl1.transform(Xs=Xs)
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Fig 1: plots source and target samples + matrix of pairwise distances
pl.figure(1, figsize=(10, 10)) pl.subplot(2, 2, 1) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.xticks([]) pl.yticks([]) pl.legend(loc=0) pl.title('Source samples') pl.subplot(2, 2, 2) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.legend(loc=0) pl.title('Target samples') pl.subplot(2, 2, 3) pl.imshow(M, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Matrix of pairwise distances') pl.tight_layout()
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Fig 2: plots optimal couplings for the different methods
pl.figure(2, figsize=(10, 6)) pl.subplot(2, 3, 1) pl.imshow(ot_emd.coupling_, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Optimal coupling\nEMDTransport') pl.subplot(2, 3, 2) pl.imshow(ot_sinkhorn.coupling_, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Optimal coupling\nSinkhornTransport') pl.subplot(2, 3, 3) pl.imshow(ot_lpl1.coupling_, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Optimal coupling\nSinkhornLpl1Transport') pl.subplot(2, 3, 4) ot.plot.plot2D_samples_mat(Xs, Xt, ot_emd.coupling_, c=[.5, .5, 1]) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.title('Main coupling coefficients\nEMDTransport') pl.subplot(2, 3, 5) ot.plot.plot2D_samples_mat(Xs, Xt, ot_sinkhorn.coupling_, c=[.5, .5, 1]) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.title('Main coupling coefficients\nSinkhornTransport') pl.subplot(2, 3, 6) ot.plot.plot2D_samples_mat(Xs, Xt, ot_lpl1.coupling_, c=[.5, .5, 1]) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.title('Main coupling coefficients\nSinkhornLpl1Transport') pl.tight_layout()
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Fig 3: plot transported samples
# display transported samples pl.figure(4, figsize=(10, 4)) pl.subplot(1, 3, 1) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=0.5) pl.scatter(transp_Xs_emd[:, 0], transp_Xs_emd[:, 1], c=ys, marker='+', label='Transp samples', s=30) pl.title('Transported samples\nEmdTransport') pl.legend(loc=0) pl.xticks([]) pl.yticks([]) pl.subplot(1, 3, 2) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=0.5) pl.scatter(transp_Xs_sinkhorn[:, 0], transp_Xs_sinkhorn[:, 1], c=ys, marker='+', label='Transp samples', s=30) pl.title('Transported samples\nSinkhornTransport') pl.xticks([]) pl.yticks([]) pl.subplot(1, 3, 3) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=0.5) pl.scatter(transp_Xs_lpl1[:, 0], transp_Xs_lpl1[:, 1], c=ys, marker='+', label='Transp samples', s=30) pl.title('Transported samples\nSinkhornLpl1Transport') pl.xticks([]) pl.yticks([]) pl.tight_layout() pl.show()
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Air Routes. The examples in this notebook demonstrate using the GremlinPython library to connect to and work with a Neptune instance. Using a Jupyter notebook in this way provides a nice way to interact with your Neptune graph database in a familiar and instantly productive environment. Load the Air Routes dataset. When the SageMaker notebook instance was created, the appropriate Python libraries for working with a TinkerPop-enabled graph were installed. We now need to `import` some classes from those libraries before connecting to our Neptune instance, loading some sample data, and running queries. The `neptune.py` helper module that was installed in the _util_ directory does all the necessary heavy lifting with regard to importing classes and loading the air routes dataset. You can reuse this module in your own notebooks, or consult its source code to see how to configure GremlinPython.
%run '../util/neptune.py'
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Using the neptune module, we can clear any existing data from the database, and load the air routes graph:
neptune.clear() neptune.bulkLoad('s3://aws-neptune-customer-samples-${AWS_REGION}/neptune-sagemaker/data/let-me-graph-that-for-you/01-air-routes/', interval=5)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Establish access to our Neptune instance. Before we can work with our graph we need to establish a connection to it. This is done using the `DriverRemoteConnection` capability as defined by Apache TinkerPop and supported by GremlinPython. The `neptune.py` helper module facilitates creating this connection. Once this cell has been run we will be able to use the variable `g` to refer to our graph in Gremlin queries in subsequent cells. By default Neptune uses port 8182, and that is what we connect to below. When you configure your own Neptune instance you can choose a different endpoint and port number by specifying the `neptune_endpoint` and `neptune_port` parameters to the `graphTraversal()` method.
g = neptune.graphTraversal()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
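For reference, a bare GremlinPython connection is roughly what the helper wraps. The sketch below is not the helper's actual code, and the endpoint string is a placeholder; it assumes the standard Apache TinkerPop client API:

```python
# Minimal sketch of a raw GremlinPython connection; graphTraversal() in the
# neptune helper hides these steps behind a single call.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

endpoint = 'wss://<neptune-instance-name>:8182/gremlin'  # placeholder endpoint
connection = DriverRemoteConnection(endpoint, 'g')
g = traversal().withRemote(connection)
```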
Let's find out a bit about the graph. Let's start off with a simple query just to make sure our connection to Neptune is working. The queries below look at all of the vertices and edges in the graph and create two maps that show the make-up of the graph. As we are using the air routes data set it is not surprising that the values returned are related to airports and routes.
vertices = g.V().groupCount().by(T.label).toList() edges = g.E().groupCount().by(T.label).toList() print(vertices) print(edges)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Find routes longer than 8,400 miles. The query below finds routes in the graph that are longer than 8,400 miles. This is done by examining the `dist` property of the `route` edges in the graph. Having found some edges that meet our criteria, we sort them in descending order by distance. The `where` step filters out the reverse direction of routes that we have already found, because we do not, in this case, want two results for each route. As an experiment, try removing the `where` line and observe the additional results that are returned. Lastly we generate some `path` results using the airport codes and route distances. Notice how we have laid the Gremlin query out over multiple lines to make it easier to read. To avoid errors, when you lay out a query in this way using Python, each line must end with a backslash character "\". The results from running the query will be placed into the variable `paths`. Notice how we ended the Gremlin query with a call to `toList`. This tells Gremlin that we want our results back in a list. We can then use a Python `for` loop to print those results. Each entry in the list will itself be a list containing the starting airport code, the length of the route and the destination airport code.
paths = g.V().hasLabel('airport').as_('a') \ .outE('route').has('dist',gt(8400)) \ .order().by('dist',Order.decr) \ .inV() \ .where(P.lt('a')).by('code') \ .path().by('code').by('dist').by('code').toList() for p in paths: print(p)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
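To see the effect of the `where` step described above, a quick check (a sketch; it assumes the same `g` and air-routes data as above) counts the matching route edges without the de-duplication; you should get roughly twice as many results as rows printed above, because each route appears once per direction:

```python
# Without the where(P.lt('a')) step each long route is counted in both
# directions, so this count is roughly double the number of paths above.
n = g.V().hasLabel('airport') \
     .outE('route').has('dist', gt(8400)) \
     .count().next()
print(n)
```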
Draw a bar chart that represents the routes we just found. One of the nice things about using Python to work with our graph is that we can take advantage of the larger Python ecosystem of libraries such as `matplotlib`, `numpy` and `pandas` to further analyze our data and represent it pictorially. So, now that we have found some long airline routes, we can build a bar chart that represents them graphically.
import matplotlib.pyplot as plt; plt.rcdefaults() import numpy as np import matplotlib.pyplot as plt import pandas as pd routes = list() dist = list() # Construct the x-axis labels by combining the airport pairs we found # into strings with with a "-" between them. We also build a list containing # the distance values that will be used to construct and label the bars. for i in range(len(paths)): routes.append(paths[i][0] + '-' + paths[i][2]) dist.append(paths[i][1]) # Setup everything we need to draw the chart y_pos = np.arange(len(routes)) y_labels = (0,1000,2000,3000,4000,5000,6000,7000,8000,9000) freq_series = pd.Series(dist) plt.figure(figsize=(11,6)) fs = freq_series.plot(kind='bar') fs.set_xticks(y_pos, routes) fs.set_ylabel('Miles') fs.set_title('Longest routes') fs.set_yticklabels(y_labels) fs.set_xticklabels(routes) fs.yaxis.set_ticks(np.arange(0, 10000, 1000)) fs.yaxis.set_ticklabels(y_labels) # Annotate each bar with the distance value for i in range(len(paths)): fs.annotate(dist[i],xy=(i,dist[i]+60),xycoords='data',ha='center') # We are finally ready to draw the bar chart plt.show()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Explore the distribution of airports by continent. The next example queries the graph to find out how many airports are in each continent. The query starts by finding all vertices that are continents. Next, those vertices are grouped, which creates a map (or dict) whose keys are the continent descriptions and whose values represent the counts of the outgoing edges with a 'contains' label. The resulting map is then sorted using the keys in ascending order. That result is returned to our Python code as the variable `m`, and we can then print the map nicely using regular Python concepts.
# Return a map where the keys are the continent names and the values are the # number of airports in that continent. m = g.V().hasLabel('continent') \ .group().by('desc').by(__.out('contains').count()) \ .order(Scope.local).by(Column.keys) \ .next() for c,n in m.items(): print('%4d %s' %(n,c))
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
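Since `m` is an ordinary Python dict, it can be inspected directly. A small sanity check (assuming the cell above has been run): the per-continent counts should add up to the total number of airports in the graph.

```python
# The values of the grouped map should sum to the airport count in the graph.
print(sum(m.values()))
print(g.V().hasLabel('airport').count().next())
```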
Draw a pie chart representing the distribution by continent. Rather than return the results as text like we did above, it might be nicer to display them as percentages on a pie chart. That is what the code in the next cell does. Rather than return the descriptions of the continents (their names), this time our Gremlin query simply retrieves the two-character code representing each continent.
import matplotlib.pyplot as plt; plt.rcdefaults() import numpy as np # Return a map where the keys are the continent codes and the values are the # number of airports in that continent. m = g.V().hasLabel('continent').group().by('code').by(__.out().count()).next() fig,pie1 = plt.subplots() pie1.pie(m.values() \ ,labels=m.keys() \ ,autopct='%1.1f%%'\ ,shadow=True \ ,startangle=90 \ ,explode=(0,0,0.1,0,0,0,0)) pie1.axis('equal') plt.show()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Find some routes from London to San Jose and draw them. One of the nice things about connected graph data is that it lends itself nicely to visualizations that people can get value from just by looking at them. The Python `networkx` library makes it fairly easy to draw a graph. The next example takes advantage of this capability to draw a directed graph (DiGraph) of a few airline routes. The query below starts by finding the vertex that represents London Heathrow (LHR). It then finds up to 15 routes from LHR that end up in San Jose, California (SJC) with one stop on the way. Those routes are returned as a list of paths. Each path will contain the three-character IATA codes representing the airports found. The main purpose of this example is to show that we can easily extract part of a larger graph and render it graphically in a way that is easy for an end user to comprehend.
import matplotlib.pyplot as plt; plt.rcdefaults() import numpy as np import matplotlib.pyplot as plt import pandas as pd import networkx as nx # Find up to 15 routes from LHR to SJC that make one stop. paths = g.V().has('airport','code','LHR') \ .out().out().has('code','SJC').limit(15) \ .path().by('code').toList() # Create a new empty DiGraph G=nx.DiGraph() # Add the routes we found to DiGraph we just created for p in paths: G.add_edge(p[0],p[1]) G.add_edge(p[1],p[2]) # Give the starting and ending airports a different color colors = [] for label in G: if label in['LHR','SJC']: colors.append('yellow') else: colors.append('#11cc77') # Now draw the graph plt.figure(figsize=(5,5)) nx.draw(G, node_color=colors, node_size=1200, with_labels=True) plt.show()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
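Once the routes are in a `networkx` DiGraph, the usual graph algorithms become available as well. For example (a sketch that assumes the `G` built in the cell above):

```python
import networkx as nx

# Every LHR -> SJC itinerary in this small extracted graph has one stop,
# so the shortest path contains three airports.
print(nx.shortest_path(G, 'LHR', 'SJC'))
```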
PART 2 - Examples that use iPython Gremlin. This part of the notebook contains examples that use the iPython Gremlin Jupyter extension to work with a Neptune instance using Gremlin. Configuring iPython Gremlin to work with Neptune. Before we can start to use iPython Gremlin we need to load the Jupyter Kernel extension and configure access to our Neptune endpoint.
# Create a string containing the full Web Socket path to the endpoint # Replace <neptune-instance-name> with the name of your Neptune instance. # which will be of the form myinstance.us-east-1.neptune.amazonaws.com #neptune_endpoint = '<neptune-instance-name>' import os neptune_endpoint = os.environ['NEPTUNE_CLUSTER_ENDPOINT'] neptune_port = os.environ['NEPTUNE_CLUSTER_PORT'] neptune_gremlin_endpoint = 'wss://' + neptune_endpoint + ':' + neptune_port + '/gremlin' # Load the iPython Gremlin extension and setup access to Neptune. %load_ext gremlin %gremlin.connection.set_current $neptune_gremlin_endpoint
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Run this cell if you need to reload the Gremlin extension. Occasionally it becomes necessary to reload the iPython Gremlin extension to make things work. Running this cell will do that for you.
# Re-load the iPython Gremlin Jupyter Kernel extension. %reload_ext gremlin
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
A simple query to make sure we can connect to the graph. Find all the airports in England that are in London. Notice that when using iPython Gremlin you do not need to use a terminal step such as `next` or `toList` at the end of the query in order to get it to return results. As mentioned earlier in this post, the `%reset -f` is to work around a known issue with iPython Gremlin.
%reset -f %gremlin g.V().has('airport','region','GB-ENG') \ .has('city','London').values('desc')
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
You can store the results of a query in a variable just as when using GremlinPython. The query below is the same as the previous one except that the results of running the query are stored in the variable `places`. We can then work with that variable in our code.
%reset -f places = %gremlin g.V().has('airport','region','GB-ENG') \ .has('city','London').values('desc') for p in places: print(p)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Treating entire cells as Gremlin. Any cell that begins with `%%gremlin` tells iPython Gremlin to treat the entire cell as Gremlin. You cannot mix Python code into these cells.
%%gremlin g.V().has('city','London').has('region','GB-ENG').count()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
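For instance, another whole-cell query (a sketch; it assumes the same air-routes data is still loaded) could count the routes out of Heathrow:

```python
%%gremlin
g.V().has('airport','code','LHR').out('route').count()
```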
LOAD DESIRED MODEL
# load the trained classifier; load_model (and the commented-out plot_model) are assumed to come from keras, imported earlier in the notebook model = load_model('./for_old22/reverse_MFCC_Dense_Classifier_l-3_u-512_e-1000_1588062326.h5') # plot_model(model, to_file='reverse_MFCC_Dense_Classifier_model.png', show_shapes=True,show_layer_names=True)
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
LOAD TEST DATA
#read test dataset from csv # librispeech data5_unseen_10 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data5_unseen_10ms_R.csv') data5_unseen_50 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data5_unseen_50ms_R.csv') data5_unseen_100 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data5_unseen_100ms_R.csv') data5_unseen_500 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data5_unseen_500ms_R.csv') data5_unseen_1000 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data5_unseen_1000ms_R.csv') # musan #music data6_unseen_10 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data6_unseen_10ms_R.csv') data6_unseen_50 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data6_unseen_50ms_R.csv') data6_unseen_100 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data6_unseen_100ms_R.csv') data6_unseen_500 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data6_unseen_500ms_R.csv') data6_unseen_1000 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data6_unseen_1000ms_R.csv') #speech data7_10 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data7_unseen_10ms_R.csv') data7_50 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data7_unseen_50ms_R.csv') data7_100 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data7_unseen_100ms_R.csv') data7_500 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data7_unseen_500ms_R.csv') data7_1000 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data7_unseen_1000ms_R.csv')
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
GET TOTAL NUMBER OF FILES PER TYPE, i.e. get the number of entries per dataset (5, 6, 7) OR the number of entries per IR length (10, 50, 100, 500, 1000 ms)
investigate_differencess_between_datasets = 1 # else investigate between IR lenght #aggregate all data if investigate_differencess_between_datasets: L5 = len(data5_unseen_10) + len(data5_unseen_50) + len(data5_unseen_100) + len(data5_unseen_500) + len(data5_unseen_1000) L6 = len(data6_unseen_10) + len(data6_unseen_50) + len(data6_unseen_100) + len(data6_unseen_500) + len(data6_unseen_1000) L7 = len(data7_10) + len(data7_50) + len(data7_100) + len(data7_500) + len(data7_1000) print(f'number of music samples: {L6}') print(f'number of speech samples: {L5+L7} \tof which {L5} are from Librispeech and {L7} are from Musan') data = pd.concat([data5_unseen_10, data5_unseen_50, data5_unseen_100, data5_unseen_500, data5_unseen_1000, data6_unseen_10, data6_unseen_50, data6_unseen_100, data6_unseen_500, data6_unseen_1000, data7_10, data7_50, data7_100, data7_500, data7_1000]) else: L_10 = len(data5_unseen_10) + len(data6_unseen_10) + len(data7_10) L_50 = len(data5_unseen_50) + len(data6_unseen_50) + len(data7_50) L_100 = len(data5_unseen_100) + len(data6_unseen_100) + len(data7_100) L_500 = len(data5_unseen_500) + len(data6_unseen_500) + len(data7_500) L_1000 = len(data5_unseen_1000) + len(data6_unseen_1000) + len(data7_1000) print(f'number of IR_10ms samples: {L_10}') print(f'number of IR_50ms samples: {L_50}') print(f'number of IR_100ms samples: {L_100}') print(f'number of IR_500ms samples: {L_500}') print(f'number of IR_1000ms samples: {L_1000}') data = pd.concat([data5_unseen_10, data6_unseen_10, data7_10, data5_unseen_50, data6_unseen_50, data7_50, data5_unseen_100, data6_unseen_100, data7_100, data5_unseen_500, data6_unseen_500, data7_500, data5_unseen_1000, data6_unseen_1000, data7_1000]) print() print(f'number of rows: {data.shape[0]}') #randomly display some of the data print('random selection of rows:') data_subset = data.sample(n=5) data_subset.head()
number of music samples: 15800 number of speech samples: 16000 of which 10000 are from Librispeech and 6000 are from Musan number of rows: 31800 random selection of rows:
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
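The repeated `pd.read_csv` calls above all follow the same naming pattern, so a small helper could load one dataset per call. This is only a sketch under that assumption; the `load_set` helper below is hypothetical and the explicit reads above remain the reference.

```python
import pandas as pd

# Hypothetical helper: load all IR lengths for one dataset prefix at once.
base = 'D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale'

def load_set(prefix):
    return {ms: pd.read_csv(f'{base}/{prefix}_{ms}ms_R.csv')
            for ms in (10, 50, 100, 500, 1000)}

# e.g. data5 = load_set('data5_unseen'); data5[10] is the 10 ms table
```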
PREPARING DATA
#dropping unneccesary columns and storing filenames elsewhere fileNames = data['filename'] data = data.drop(['filename'],axis=1) # function to reduce label resolution from every 9° to 4 quadrants def reduce_Resolution(old_data): new_data = old_data.iloc[:, -1] new_label_list = pd.DataFrame(new_data) for i in range(len(new_data)): if 0 <= new_data.iloc[i] < 90: new_label_list.iloc[i] = 0 if 90 <= new_data.iloc[i] < 180: new_label_list.iloc[i] = 1 if 180 <= new_data.iloc[i] < 270: new_label_list.iloc[i] = 2 if 270 <= new_data.iloc[i] < 360: new_label_list.iloc[i] = 3 return new_label_list #making labels labels_list = data.iloc[:, -1] # labels_list = reduce_Resolution(data) encoder = LabelEncoder() y = encoder.fit_transform(labels_list) print(f'labels are: {y}') # normalizing scaler = StandardScaler() X = scaler.fit_transform(np.array(data.iloc[:, :-1], dtype = float))
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
MAKE PREDICTIONS AND EVALUATE
#make prediction for each sample in X and evaluate entire model to get an idea of accuracy predictions = model.predict(X) final_predictions = np.argmax(predictions,axis=1) test_loss, test_acc = model.evaluate(X,y)
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
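The 40 output classes correspond to 9° bins (the plots below relabel the axes by multiplying the bin index by 9), so a predicted class can be mapped back to an angle. A small sketch under that assumption:

```python
# Assumed mapping: class index k corresponds to an azimuth of k * 9 degrees.
predicted_degrees = final_predictions * 9
print(predicted_degrees[:10])
```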
COMPUTE SOME GENERAL STATISTICS
#method to get difference between elements on circular scale def absolute_diff(int1,int2): m_min = min(int1,int2) m_max = max(int1,int2) diff1 = m_max-m_min diff2 = m_min + 40 - m_max return diff1 if diff1 <=20 else diff2 ##COMPUTE STATISTICS labels = y predictions = predictions #check which errors occur occuring_errors = np.zeros(21) #check which direction are misclassified most often hardest_to_predict = np.zeros(40) #check what type of files make misclassification indexes_of_misclassifications = [] misclassifications = [] #check what type of files make the worst misclassifications indexes_of_grave_misclassifications = [] grave_misclassifications = [] #check which datasets produces what type of errors all_errors_5 = np.zeros(21) all_errors_6 = np.zeros(21) all_errors_7 = np.zeros(21) all_errors_10 = np.zeros(21) all_errors_50 = np.zeros(21) all_errors_100 = np.zeros(21) all_errors_500 = np.zeros(21) all_errors_1000 = np.zeros(21) #correct direction all_correct = np.zeros(40) sum_correct = 0 for i in range(final_predictions.shape[0]): label = labels[i] predicted = final_predictions[i] error = absolute_diff(predicted,label) occuring_errors[error] = occuring_errors[error] + 1 if error != 0: hardest_to_predict[label] += 1 indexes_of_misclassifications.append(i) misclassifications.append(fileNames.iloc[i]) else : all_correct[label] += 1 sum_correct += 1 if error > 5: indexes_of_grave_misclassifications.append(i) grave_misclassifications.append(fileNames.iloc[i]) if investigate_differencess_between_datasets: if 0 <= i < L5: all_errors_5[error] += 1 elif L5 <= i < L5 + L6: all_errors_6[error] += 1 elif L5 + L6 <= i < L5 + L6 + L7: all_errors_7[error] += 1 else: if 0 <= i < L_10: all_errors_10[error] += 1 elif L_10 <= i < L_10 + L_50: all_errors_50[error] += 1 elif L_10 + L_50 <= i < L_10 + L_50 + L_100: all_errors_100[error] += 1 elif L_10 + L_50 + L_100 <= i < L_10 + L_50 + L_100 + L_500: all_errors_500[error] += 1 elif L_10 + L_50 + L_100 + L_500 <= i < L_10 + L_50 + L_100 + L_500 + L_1000: all_errors_1000[error] += 1 avg_occuring_errors = occuring_errors/(labels.shape[0]) # avg_hardest_to_predict = hardest_to_predict/(labels.shape[0]) avg_hardest_to_predict = hardest_to_predict/(labels.shape[0]-sum_correct) if investigate_differencess_between_datasets: avg_errors_5 = all_errors_5/L5 avg_errors_6 = all_errors_6/L6 avg_errors_7 = all_errors_7/L7 AVG_errors_5 = all_errors_5/(labels.shape[0]) AVG_errors_6 = all_errors_6/(labels.shape[0]) AVG_errors_7 = all_errors_7/(labels.shape[0]) else : avg_errors_10 = all_errors_10/L_10 avg_errors_50 = all_errors_50/L_50 avg_errors_100 = all_errors_100/L_100 avg_errors_500 = all_errors_500/L_500 avg_errors_1000 = all_errors_1000/L_1000 AVG_errors_10 = all_errors_10/(labels.shape[0]) AVG_errors_50 = all_errors_50/(labels.shape[0]) AVG_errors_100 = all_errors_100/(labels.shape[0]) AVG_errors_500 = all_errors_500/(labels.shape[0]) AVG_errors_1000 = all_errors_1000/(labels.shape[0]) hardest_direction = np.argmax(avg_hardest_to_predict) indexes_of_hardes_direction = np.where(labels==hardest_direction) hardest_direction_confusion = np.zeros(40) hardest_direction_start_index = indexes_of_hardes_direction[0][0] hardest_direction_end_index = indexes_of_hardes_direction[0][-1] #iterate over all predictions that should have predicted 'hardest_direction' and store what they actually predicted for i in range(indexes_of_hardes_direction[0][0],indexes_of_hardes_direction[0][-1]): predicted = np.argmax(predictions[i]) hardest_direction_confusion[predicted] += 1 
avg_hardest_direction_confusion = hardest_direction_confusion / (hardest_direction_end_index-hardest_direction_start_index) #compute confusion matrix confusion_array = confusion_matrix(y,final_predictions) #true,#predicted #compute confusion matrix if labels can be off by 27° tolerated_error_d = 27#degrees print(f'tolerated error is {tolerated_error_d}°') tolerated_error = int(tolerated_error_d/9) tolerated_final_predictions = final_predictions for i in range(final_predictions.shape[0]): predicition = final_predictions[i] label = y[i] error = absolute_diff(predicition,label) if error < tolerated_error: tolerated_final_predictions[i] = label tolerated_confusion_array = confusion_matrix(y, tolerated_final_predictions)
tolerated error is 27°
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
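The `absolute_diff` helper defined above measures errors on a circular scale of 40 bins (9° each), so the largest possible error is 20 bins, i.e. 180°. A quick illustration (assuming the function as defined above):

```python
# Bins 1 and 39 are only 2 bins (18°) apart once the wrap-around is taken
# into account; bins 0 and 20 are the maximal 20 bins (180°) apart.
print(absolute_diff(1, 39))   # -> 2
print(absolute_diff(0, 20))   # -> 20
```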
PLOT STATISTICS
#ERROR OCCURENCE x_as = np.array(range(21)) plt.bar(x_as,avg_occuring_errors) plt.title('reverse model: average error occurrence on unseen data') plt.ylabel('%') plt.ylim([0,0.5]) plt.xlabel('error [°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180]) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_error_unseen_data' plt.savefig(f'{save_fig_file_path}.png') plt.show(); error_27 = np.sum(avg_occuring_errors[0:3]) print(f'{int(error_27*100)}% of predictions are correct within 27°') error_45 = np.sum(avg_occuring_errors[0:5]) print(f'{int(error_45*100)}% of predictions are correct within 45°') error_90 = np.sum(avg_occuring_errors[0:10]) print(f'{int(error_90*100)}% of predictions are correct within 90°') #HARDEST TO PREDICT x_as = np.array(range(40)) plt.bar(x_as,avg_hardest_to_predict) plt.title('reverse model: hardest directions to predict, unseen data') plt.ylabel('%') plt.ylim([0,0.05]) plt.xlabel('angle [°]') plt.xticks([0,5,10,15,20,25,30,35,40], [ 0,45,90,135,180,225,270,315,360]) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_hardest_dir' # plt.savefig(f'{save_fig_file_path}.png') plt.show(); #CONFUSION CORRESPONDING TO HARDEST DIRECTION x_as = np.array(range(40)) plt.title(f'reverse model: confusion for hardest direction to predict ({hardest_direction*9}°), unseen data') plt.ylabel('%') plt.xlabel('angle [°]') plt.xticks([0,5,10,15,20,25,30,35,40], [ 0,45,90,135,180,225,270,315,360]) plt.bar(x_as,avg_hardest_direction_confusion); #CONFUSION MATRIX df_cm = pd.DataFrame(confusion_array, range(40), range(40)) norm_cm = df_cm.astype('float') / df_cm.sum(axis=1)[:, np.newaxis] df_cm = norm_cm plt.figure(figsize=(22,18),dpi=120) sn.set(font_scale=2) # for label size sn.heatmap(df_cm,vmin=0,vmax=1) # font size plt.yticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20,22,24,26,28,30,32,34,36,38], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180,198,216,234,252,270,288,306,324,342]) plt.xlabel('predicted angle[°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20,22,24,26,28,30,32,34,36,38], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180,198,216,234,252,270,288,306,324,342]) plt.ylabel('actual angle[°]') plt.title('reverse model: normalized confusion matrix',fontsize=40) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_confusion' # plt.savefig(f'{save_fig_file_path}.png') plt.show() sn.set(font_scale=1) #CONFUSION MATRIX df_cm = pd.DataFrame(tolerated_confusion_array, range(40), range(40)) norm_cm = df_cm.astype('float') / df_cm.sum(axis=1)[:, np.newaxis] df_cm = norm_cm plt.figure(figsize=(22,18),dpi=120) sn.set(font_scale=2) # for label size sn.heatmap(df_cm,vmin=0,vmax=1) # font size plt.yticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20,22,24,26,28,30,32,34,36,38], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180,198,216,234,252,270,288,306,234,342]) plt.xlabel('predicted angle[°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20,22,24,26,28,30,32,34,36,38], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180,198,216,234,252,270,288,306,234,342]) plt.ylabel('actual angle[°]') plt.title(f'reverse model: normalized confusion matrix with toleration of {tolerated_error_d}',fontsize=40) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_confusion_tol' # plt.savefig(f'{save_fig_file_path}.png') plt.show() sn.set(font_scale=1) #RANDOMLY SELECT 1 INDEX AND COMPARE THE LABEL 
VS THE PREDICTION index = randrange(0,X.shape[0]) label = y[index] print("label:") print(label) print("predicted:") print(np.argmax(predictions[index])) #linear bar plot plt.bar(np.arange(len(predictions[index,:])),predictions[index,:], align='center', alpha=1) labels = np.zeros((40,)) labels[label] = np.amax(predictions[index]) plt.bar(np.arange(len(predictions[index,:])),labels[:], align='center', alpha=1) plt.ylabel('%') plt.xlabel('label') plt.title('direction') plt.show() #polar bar plot N = 40 theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False) width = np.pi / 40 ax = plt.subplot(111, projection='polar') ax.bar(theta, predictions[index,:], width=width, color='b', bottom=0.0, alpha=1) ax.bar(theta, labels[:], width=width, color='g', bottom=0.0, alpha=0.5) r_max = np.amax(predictions[index]) r = np.linspace(0.1*r_max, 0.8*r_max, 3) r = np.round(r,2) ax.set_rticks(r) plt.tight_layout() plt.show() #RANDOMLY SELECT A HUNDRED SAMPLES AND PLOT THOSE WHO ARE OF BY MORE THAN 45° AND SAVE THOSE save_fig_location = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/misclassifications' counter = 0 randomIndexes = random.sample(range(0,X.shape[0]),100) allErrors = [] for index in randomIndexes: label = y[index] predicted = np.argmax(predictions[index]) output = f'label: {label} \t predict: {predicted}' error = absolute_diff(predicted,label) error = absolute_diff(predicted,label) if error != 0: output += f'\t error: {error}' allErrors.append(error) print(output) if error >5: labels = np.zeros((40,)) labels[label] = np.amax(predictions[index]) ax = plt.subplot(111, projection='polar') ax.bar(theta, predictions[index,:], width=width, color='b', bottom=0.0, alpha=1) ax.bar(theta, labels[:], width=width, color='g', bottom=0.0, alpha=0.5) r_max = np.amax(predictions[index]) r = np.linspace(0.1*r_max, 0.8*r_max, 3) r = np.round(r,2) ax.set_rticks(r) plt.tight_layout() # plt.savefig(f'{save_fig_location}/{fileNames.iloc[index]}.png') plt.show() print(fileNames.iloc[index]) print() counter += 1 print(f'{counter} of {len(randomIndexes)} were off by more than 45°') allErrors = np.array(allErrors) m_mean = np.round(np.mean(allErrors)) m_max = np.amax(allErrors) print(f'average error is {m_mean} or {m_mean*9}°') print(f'max error is {m_max} or {m_max*9}°')
label: 16 predict: 24 error: 8
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
RANDOM TESTING
#types of errors #iterate direction per direction and see what types of errors occur index = 0 while y[index] == 0: ax = plt.subplot(111, projection='polar') ax.bar(theta, predictions[index,:], width=width, color='b', bottom=0.0, alpha=1) plt.show() index += 1 dirrection = 90 df = pd.DataFrame(data) sub_data = df.loc[df['label'] == dirrection] sub_data.head() #making labels labels_list = sub_data.iloc[:, -1] # labels_list = reduce_Resolution(data) encoder = LabelEncoder() sub_y = encoder.fit_transform(labels_list) print(sub_y) print(sub_y.shape) # normalizing scaler = StandardScaler() sub_X = scaler.fit_transform(np.array(sub_data.iloc[:, :-1], dtype = float)) #make prediction for each sample in X and evaluate entire model to get an idea of accuracy sub_predictions = model.predict(X) #randomly select a hundred samples and plot those who are of by more than 45° and save those save_fig_location = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/reverse_Mel_scale/unseen' counter = 0 randomIndexes = random.sample(range(0,sub_X.shape[0]),100) allErrors = [] for index in randomIndexes: label = sub_y[index] predicted = np.argmax(sub_predictions[index]) output = f'label: {label} \t predict: {predicted}' error = absolute_diff(predicted,label) error = absolute_diff(predicted,label) if error != 0: output += f'\t error: {error}' allErrors.append(error) print(output) if error >5: smart_pred = smart_prediction(predictions[index]) output += f'\t smart_predict: {smart_pred}' smart_error = absolute_diff(smart_pred,label) output += f'\t smart_error: {smart_error}' print(output) labels = np.zeros((40,)) labels[label] = np.amax(predictions[index]) ax = plt.subplot(111, projection='polar') ax.bar(theta, predictions[index,:], width=width, color='b', bottom=0.0, alpha=1) ax.bar(theta, labels[:], width=width, color='g', bottom=0.0, alpha=0.5) # plt.savefig(f'{save_fig_location}/{fileNames.iloc[index]}.png') plt.show() print(fileNames.iloc[index]) print() counter += 1 print(f'{counter} of {len(randomIndexes)} were off by more than 45°') allErrors = np.array(allErrors) m_mean = np.round(np.mean(allErrors)) m_max = np.amax(allErrors) print(f'average error is {m_mean} or {m_mean*9}°') print(f'max error is {m_max} or {m_max*9}°') og_true = np.copy(y) og_predictions = final_predictions new_true = np.zeros(og_true.shape[0]) new_predictions = np.zeros(og_predictions.shape[0]) for i in range(og_predictions.shape[0]): if og_predictions[i] & 0x1: #odd new_predictions[i] = int(og_predictions[i]-1) else : #even new_predictions[i] = int(og_predictions[i]) if og_true[i] & 0x1: #odd new_true[i] = int(og_true[i]-1) else : #even new_true[i] = int(og_true[i]) red_confusion_array = confusion_matrix(new_true,new_predictions) #true,#predicted red_confusion_array.shape #CONFUSION MATRIX df_cm = pd.DataFrame(red_confusion_array, range(20), range(20)) norm_cm = df_cm.astype('float') / df_cm.sum(axis=1)[:, np.newaxis] df_cm = norm_cm plt.figure(figsize=(22,18),dpi=120) sn.set(font_scale=2) # for label size sn.heatmap(df_cm,vmin=0,vmax=1) # font size plt.yticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18,], [ 0, 36, 72, 108, 144, 180,216,252,288,324]) plt.xlabel('predicted angle[°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18,], [ 0, 36, 72, 108, 144, 180,216,252,288,324]) plt.ylabel('actual angle[°]') plt.title(f'reverse model: normalized confusion matrix ',fontsize=40) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_confusion_reduced' plt.savefig(f'{save_fig_file_path}.png') plt.show() 
sn.set(font_scale=1)
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
ALL MISCLASSIFICATIONS
error_5 = 0 error_6 = 0 error_7 = 0 for i in range(len(indexes_of_misclassifications)): if 0 <= indexes_of_misclassifications[i] < L5: error_5 += 1 elif L5 <= indexes_of_misclassifications[i] < L5 + L6: error_6 += 1 elif L5 + L6 <= indexes_of_misclassifications[i] < L5 + L6 + L7: error_7 += 1 print('errors per dataset are..') print(f'dataset5 has {error_5} total errors which is {int(100*error_5/L5)}% of this dataset') print(f'dataset6 has {error_6} total errors which is {int(100*error_6/L6)}% of this dataset') print() print('overall picutre is ..') print(f'dataset5 accounts for {int(100*error_5/len(indexes_of_misclassifications))}% of total errors') print(f'dataset6 accounts for {int(100*error_6/len(indexes_of_misclassifications))}% of total errors') print(f'dataset7 accounts for {int(100*error_7/len(indexes_of_misclassifications))}% of total errors') print() print('LATEX:') print(f'dataset3 & speech & {error_5} & {int(100*error_5/L5)}\% & {int(100*error_5/len(indexes_of_misclassifications))}\% \\\\') print(f'dataset4 & speech & {error_7} & {int(100*error_7/L7)}\% & {int(100*error_7/len(indexes_of_misclassifications))}\% \\\\') print(f'dataset5 & music & {error_6} & {int(100*error_6/L6)}\% & {int(100*error_6/len(indexes_of_misclassifications))}\% \\\\')
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
GRAVE MISCLASSIFICATIONS i.e. error > 45°
error_5_G = 0 error_6_G = 0 error_7_G = 0 for i in range(len(indexes_of_grave_misclassifications)): if 0 <= indexes_of_grave_misclassifications[i] < L5: error_5_G += 1 elif L5 <= indexes_of_grave_misclassifications[i] < L5 + L6: error_6_G += 1 elif L5 + L6 <= indexes_of_grave_misclassifications[i] < L5 + L6 + L7: error_7_G += 1 print('errors per dataset are..') print(f'dataset5 has {error_5_G} total errors which is {int(100*error_5_G/L5)}% of this dataset') print(f'dataset6 has {error_6_G} total errors which is {int(100*error_6_G/L6)}% of this dataset') print(f'dataset7 has {error_7_G} total errors which is {int(100*error_7_G/L7)}% of this dataset') print() print('overall picutre is ..') print(f'dataset5 accounts for {int(100*error_5_G/len(indexes_of_grave_misclassifications))}% of total errors') print(f'dataset6 accounts for {int(100*error_6_G/len(indexes_of_grave_misclassifications))}% of total errors') print(f'dataset7 accounts for {int(100*error_7_G/len(indexes_of_grave_misclassifications))}% of total errors') print() print('LATEX:') print(f'dataset3 & speech & {error_5_G} & {int(100*error_5_G/L5)}\% & {int(100*error_5_G/len(indexes_of_grave_misclassifications))}\% & {int(100*error_5_G/len(indexes_of_misclassifications))}\% \\\\') print(f'dataset4 & speech & {error_7_G} & {int(100*error_7_G/L7)}\% & {int(100*error_7_G/len(indexes_of_grave_misclassifications))}\% & {int(100*error_7_G/len(indexes_of_misclassifications))}\% \\\\') print(f'dataset5 & music & {error_6_G} & {int(100*error_6_G/L6)}\% & {int(100*error_6_G/len(indexes_of_grave_misclassifications))}\% & {int(100*error_6_G/len(indexes_of_misclassifications))}\% \\\\')
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
ERROR PER DATASET
x_as = np.array(range(21)) plt.bar(x_as,avg_errors_5) plt.title('reverse model: average error occurrence on unseen data5') plt.ylabel('%') plt.ylim([0,1]) plt.xlabel('error [°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180]) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_errors5_error_unseen_data' # plt.savefig(f'{save_fig_file_path}.png') plt.show(); error_27 = np.sum(avg_errors_5[0:3]) print(f'{int(error_27*100)}% of predictions are correct within 27°') error_45 = np.sum(avg_errors_5[0:5]) print(f'{int(error_45*100)}% of predictions are correct within 45°') error_90 = np.sum(avg_errors_5[0:10]) print(f'{int(error_90*100)}% of predictions are correct within 90°') x_as = np.array(range(21)) plt.bar(x_as,avg_errors_6) plt.title('reverse model: average error occurrence on unseen data6') plt.ylabel('%') plt.ylim([0,1]) plt.xlabel('error [°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180]) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_errors6_error_unseen_data' # plt.savefig(f'{save_fig_file_path}.png') plt.show(); error_27 = np.sum(avg_errors_6[0:3]) print(f'{int(error_27*100)}% of predictions are correct within 27°') error_45 = np.sum(avg_errors_6[0:5]) print(f'{int(error_45*100)}% of predictions are correct within 45°') error_90 = np.sum(avg_errors_6[0:10]) print(f'{int(error_90*100)}% of predictions are correct within 90°') x_as = np.array(range(21)) plt.bar(x_as,avg_errors_7) plt.title('reverse model: average error occurrence on unseen data7') plt.ylabel('%') plt.ylim([0,1]) plt.xlabel('error [°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180]) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_model_errors5_error_unseen_data' # plt.savefig(f'{save_fig_file_path}.png') plt.show(); error_27 = np.sum(avg_errors_7[0:3]) print(f'{int(error_27*100)}% of predictions are correct within 27°') error_45 = np.sum(avg_errors_7[0:5]) print(f'{int(error_45*100)}% of predictions are correct within 45°') error_90 = np.sum(avg_errors_7[0:10]) print(f'{int(error_90*100)}% of predictions are correct within 90°') df = pd.DataFrame({'dataset3':avg_errors_5, 'dataset4':avg_errors_6, 'dataset5':avg_errors_7}) df.plot(kind='bar', stacked=True) plt.title('distribution of errors between datasets') plt.ylabel('%') # plt.ylim([0,0.5]) plt.xlabel('error [°]') plt.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180]) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/error_distribtuion_between_datasets' # plt.savefig(f'{save_fig_file_path}.png') x = np.array(range(21)) width = 0.25 ax = plt.subplots(111) rects1 = ax.bar(x - width/3, avg_errors_5, width, label='avg_errors_5') rects2 = ax.bar(x + width, avg_errors_6, width, label='avg_errors_6') rects3 = ax.bar(x + width/3, avg_errors_7, width, label='avg_errors_7') ax.set_ylabel('Scores') ax.set_title('Scores by group and gender') ax.set_xticks(x) ax.set_xticklabels(labels) ax.legend() # ax.xticks([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180]) fig.tight_layout() x_as = np.array(range(21)) plt.bar(x_as,AVG_errors_7) plt.title('reverse model: average error occurrence on unseen data') plt.ylabel('%') plt.ylim([0,1]) plt.xlabel('error [°]') plt.xticks([ 0, 
2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180]) save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/rev_modeldumrror_unseen_data' # plt.savefig(f'{save_fig_file_path}.png') plt.show();
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
ATTEMPT TO PLOT RADAR CHART
avg_correct = all_correct/sum_correct x_as = np.array(range(40)) avg_correct = all_correct/sum_correct x_as = np.array(range(40)) N = 40 theta = np.linspace(0.0, 2 * np.pi, N, endpoint=True) width = np.pi / 40 fig= plt.figure(dpi=120) ax = fig.add_subplot(111, polar=True) ax.plot(theta, avg_correct, '-', linewidth=2) ax.fill(theta, avg_correct, alpha=0.25) # ax.set_thetagrids(angles * 180/np.pi, labels) plt.yticks([]) ax.set_title("distribution of correctly predicted directions", y=1.1) ax.grid(True) fig.tight_layout() save_fig_file_path = 'D:/Users/MC/Documents/UNI/MASTER/thesis/SCRIPTURE_FIGURES/H6/correctness_distribution' # plt.savefig(f'{save_fig_file_path}.png') x_as = np.array(range(40)) plt.bar(x_as,all_correct) plt.show() N = 40 theta = np.linspace(0.0, 2 * np.pi, N, endpoint=True) width = np.pi / 40 fig= plt.figure(dpi=120) ax = fig.add_subplot(111, polar=True) ax.plot(theta, all_correct, '-', linewidth=2) ax.fill(theta, all_correct, alpha=0.25) # ax.set_thetagrids(angles * 180/np.pi, labels) plt.yticks([]) ax.set_title("distribution of correctly predicted directions", y=1.1) ax.grid(True) fig.tight_layout() plt.show diagonal = np.diagonal(confusion_array) fig= plt.figure(dpi=120) ax = fig.add_subplot(111, polar=True) ax.plot(theta, diagonal, '-', linewidth=2) ax.fill(theta, diagonal, alpha=0.25) # ax.set_thetagrids(angles * 180/np.pi, labels) plt.yticks([]) ax.set_title("distribution of correctly predicted directions", y=1.1) ax.grid(True) fig.tight_layout() diagonal-all_correct
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
WHAT FILES CAUSE ERRORS
libri_error = 0 gov_error = 0 for i in range(len(indexes_of_misclassifications)): if 0 <= indexes_of_misclassifications[i] < L5: 0+0 elif L5 <= indexes_of_misclassifications[i] < L5 + L6: 0+0 elif L5 + L6 <= indexes_of_misclassifications[i] < L5 + L6 + L7: if 'us-gov' in misclassifications[i]: gov_error += 1 else : libri_error +=1 print(f'total librispeech errors are {libri_error} which is {int(100*libri_error/L7)}\% of dataset4') print(f'total us-gov errors are {gov_error} which is {int(100*gov_error/L7)}\% of dataset4')
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
WHAT IR LENGTHS CAUSE ERRORS
L_10_error = 0 L_50_error = 0 L_100_error = 0 L_500_error = 0 L_1000_error = 0 for i in range(len(indexes_of_misclassifications)): if 0 <= indexes_of_misclassifications[i] < L_10: L_10_error += 1 elif L_10 <= indexes_of_misclassifications[i] < L_10 + L_50: L_50_error += 1 elif L_10 + L_50 <= indexes_of_misclassifications[i] < L_10 + L_50 + L_100: L_100_error += 1 elif L_10 + L_50 + L_100 <= indexes_of_misclassifications[i] < L_10 + L_50 + L_100 + L_500: L_500_error += 1 elif L_10 + L_50 + L_100 + L_500 <= indexes_of_misclassifications[i] < L_10 + L_50 + L_100 + L_500 + L_1000: L_1000_error += 1 print('LATEX:') print(f'IR_10ms & {L_10_error} & {int(100*L_10_error/L_10)}\% & {int(100*L_10_error/len(indexes_of_misclassifications))}\% \\\\') print(f'IR_50ms & {L_50_error} & {int(100*L_50_error/L_10)}\% & {int(100*L_50_error/len(indexes_of_misclassifications))}\% \\\\') print(f'IR_100ms & {L_100_error} & {int(100*L_100_error/L_10)}\% & {int(100*L_100_error/len(indexes_of_misclassifications))}\% \\\\') print(f'IR_500ms & {L_500_error} & {int(100*L_500_error/L_10)}\% & {int(100*L_500_error/len(indexes_of_misclassifications))}\% \\\\') print(f'IR_1000ms & {L_1000_error} & {int(100*L_1000_error/L_10)}\% & {int(100*L_1000_error/len(indexes_of_misclassifications))}\% \\\\') #DELETE US_GOV FILES # df = pd.DataFrame(data) # df = df[~df.filename.str.contains('us-gov')] # data = df # print('random selection of rows:') # data_subset = data.sample(n=5) # data_subset.head()
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
TESTS ON DIFF IR LENGTHS
x_as = np.array(range(21)) plt.bar(x_as,avg_errors_10) plt.ylim([0,1]); x_as = np.array(range(21)) plt.bar(x_as,avg_errors_50) plt.ylim([0,1]); x_as = np.array(range(21)) plt.bar(x_as,avg_errors_100) plt.ylim([0,1]); x_as = np.array(range(21)) plt.bar(x_as,avg_errors_500) plt.ylim([0,1]); x_as = np.array(range(21)) plt.bar(x_as,avg_errors_1000) plt.ylim([0,1]);
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL