licenses | version | tree_hash | path | type | size | text | package_name | repo
---|---|---|---|---|---|---|---|---|
[ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 75 |
# Two-point correlators
```@autodocs
Modules = [ElectronGas.TwoPoint]
```
| ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 2005 |
# Analytic Expression of $G_{+}$ extracted from DMC
## Exchange-correlation Energy
The analytic expression of local field factor is based on the parametrization of correlation energy $\epsilon_c(r_s)$ in Vosko's paper doi: 10.1139/p80-159, equation (4.4):
```math
\epsilon_c(r_s) = A\left\{\ln\frac{x^2}{X(x)} + \frac{2b}{Q}\tan^{-1}\frac{Q}{2x+b}-\frac{bx_0}{X(x_0)}\left[\ln\frac{(x-x_0)^2}{X(x)}+\frac{2(b+2x_0)}{Q}\tan^{-1}\frac{Q}{2x+b}\right] \right\}\,\mathrm{Ry}
```
where $x_0$, $b$, and $c$ are free parameters obtained by fitting to numerical data, and $x=r_s^{1/2}$, $Q = (4c - b^2)^{1/2}$, $X(x) = x^2 + bx + c$.
The parameters we use for the paramagnetic case are $A = 0.0621814$, $x_0= -0.10498$, $b = 3.72744$, and $c = 12.9352$, from Table 5 of Vosko's paper.
Note that this parametrization is expressed in Rydberg energy units.
## Analytic Expression of $G_{+}$
In 1995, Moroni and coworkers used diffusion Monte Carlo to compute the local field factor $G_{+}$ (doi: 10.1103/PhysRevLett.75.689). In 1998, Corradini and coworkers fitted an analytic expression to Moroni's numerical data (doi: 10.1103/PhysRevB.57.14569).
The expression is based on three parameters:
```math
\begin{align}
&A =\frac{1}{4}-\frac{k_F^2}{4 \pi e^2}\frac{d \mu_c}{dn_0}\\
&B = \frac{(1+a_1x+a_2 x^3)}{(3+b_1 x +b_2 x^3)}\\
&C =\frac{\pi}{2e^2k_F}\frac{-d(r_s\epsilon_c)}{dr_s}
\end{align}
```
Here $x=r_s^{1/2}$, $n_0 = 3/(4\pi a_0^3 r_s^3)$ is the density of the homogeneous electron gas at a given $r_s$, and $\mu_c$ is the contribution of the correlation energy $\epsilon_c$ to the chemical potential:
```math
\mu_c = \frac{d(n_0 \epsilon_c)}{dn_0}
```
The parameters are $a_1 = 2.15, a_2 = 0.435, b_1=1.57, b_2=0.409$, valid for $r_s$ in the range 2-10.
With all parameters ready, the final expression of $G_{+}$ is:
```math
G_{+}(q)=CQ^2+\frac{BQ^2}{g+Q^2}+\alpha Q^4 e^{-\beta Q^2}
```
where
```math
\begin{align}
&g = \frac{B}{A-C}\\
&\alpha = \frac{1.5}{r_s^{1/4}}\frac{A}{Bg}\\
&\beta = \frac{1.2}{Bg}
\end{align}
```
And $Q = q/k_F$.
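For concreteness, here is a minimal Julia sketch of these formulas (this is not the ElectronGas.jl API; the derivatives entering $A$ and $C$ are assumed to be computed separately and the coefficients are simply passed in):
```julia
# VWN correlation energy per electron in Rydberg (paramagnetic fit, Table 5).
function epsilon_c(rs; A=0.0621814, x0=-0.10498, b=3.72744, c=12.9352)
    x = sqrt(rs)
    X(t) = t^2 + b*t + c
    Q = sqrt(4c - b^2)
    at = atan(Q / (2x + b))
    return A * (log(x^2 / X(x)) + (2b / Q) * at -
                b * x0 / X(x0) * (log((x - x0)^2 / X(x)) + 2(b + 2x0) / Q * at))
end

# Corradini local field factor; A, B, C as defined in the text, Q = q/kF.
function G_plus(q, kF, rs, A, B, C)
    Q = q / kF
    g = B / (A - C)
    α = 1.5 / rs^(1/4) * A / (B * g)
    β = 1.2 / (B * g)
    return C * Q^2 + B * Q^2 / (g + Q^2) + α * Q^4 * exp(-β * Q^2)
end
```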
| ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 4659 |
# Fock diagram of free electrons
**Tags**: #many-electron-problem, #UEG, #feynman-diagram
## 0.1.1 Bare electron & Yukawa/Coulomb interaction
- The bare electron dispersion $\epsilon_k=k^2/2m$
- Independent of spin factor.
### 0.1.1.1 3D Electron Gas
- Interaction:
```math
v(r)=\frac{e^2 \exp(-\lambda r)}{r}\rightarrow v_q=\frac{4\pi e^2}{q^2+\lambda^2}
```
-
```math
\Sigma_x(k) = -\int \frac{d^3q}{(2\pi)^3} n_q v_{k-q}=-\int_0^{\infty} \frac{ 2\pi n_q q^2 dq}{8\pi^3}\int_{-1}^1 dx \frac{4\pi e^2}{(k^2+q^2+2 k q x)+\lambda^2}
```
```math
=-\frac{e^2}{\pi}\int_0^{\infty} n_q q^2 dq\int_{-1}^1\frac{dx}{k^2+q^2+\lambda^2+2 k q\,x}
```
- At $T=0$, integrating over $x$ first, we obtain
```math
\Sigma_x(k)=\frac{e^2}{2\pi\,k}\int_0^{\infty} dq n_q q \ln\left(\frac{\lambda^2+(k-q)^2}{\lambda^2+(k+q)^2}\right)
```
- Next we introduce the variables $x=k/\lambda$, $y=q/\lambda$, and $x_F=k_F/\lambda$ to obtain
```math
\Sigma_x(k=\lambda x)=-\frac{\lambda e^2}{2\pi x}\int_0^{x_F} dy y \ln\left(\frac{1+(x+y)^2}{1+(x-y)^2}\right)
```
```math
=-\frac{\lambda e^2}{2\pi}
\left[
2 x_F + 2\arctan(x-x_F)-2\arctan(x+x_F)-\frac{1-x^2+x_F^2}{2 x}\ln\left(\frac{1+(x-x_F)^2}{1+(x+x_F)^2}\right)
\right]
```
- We conclude
```math
\Sigma_x(k)=-\frac{e^2 k_F}{\pi}
\left[
1 + \frac{\lambda}{k_F} \arctan(\frac{k-k_F}{\lambda})-\frac{\lambda}{k_F} \arctan(\frac{k+k_F}{\lambda})
-\frac{(\lambda^2-k^2+k_F^2)}{4 k\, k_F}
\ln\left(\frac{\lambda^2+(k-k_F)^2}{\lambda^2+(k+k_F)^2}\right)
\right]
```
- For Coulomb interaction $\lambda=0$,
```math
\Sigma_x(k)=-\frac{e^2 k_F}{\pi}
\left[
1 +\frac{(k^2-k_F^2)}{2 k\, k_F}
\ln\left|\frac{k-k_F}{k+k_F}\right|
\right]
```
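As a numerical cross-check, a small Julia sketch of the two closed forms above (not the package API; a unit system with explicit $e^2$ is assumed):
```julia
# Zero-temperature Fock (exchange) self-energy, 3D Yukawa interaction.
function fock_yukawa_3d(k; kF=1.0, λ=0.5, e2=1.0)
    -e2 * kF / π * (1 + λ/kF * (atan((k - kF)/λ) - atan((k + kF)/λ)) -
                    (λ^2 - k^2 + kF^2) / (4k*kF) *
                    log((λ^2 + (k - kF)^2) / (λ^2 + (k + kF)^2)))
end

# Coulomb limit λ = 0 (the point k = kF is a removable singularity).
fock_coulomb_3d(k; kF=1.0, e2=1.0) =
    -e2 * kF / π * (1 + (k^2 - kF^2) / (2k*kF) * log(abs((k - kF) / (k + kF))))
```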

The correction to the inverse effective mass diverges at the Fermi surface (so the effective mass itself vanishes logarithmically there):
```math
\frac{m}{m^*}=1+\frac{m}{k}\frac{\partial \Sigma_x (k)}{\partial k}=1-\frac{e^2 m}{2\pi k^2}\left[2k_F+\frac{k^2+k_F^2}{k}\ln\left|\frac{k-k_F}{k+k_F}\right|\right]
```
### 0.1.1.2 2D Electron Gas
- Interaction (**2D Yukawa interaction**):
```math
v_q=\frac{4\pi e^2}{q^2+\lambda^2}
```
- The Fock diagram,
```math
\Sigma_x(k) = -\int \frac{d^2q}{(2\pi)^2} n_q v_{k-q}=-\int_0^{\infty}\frac{ n_q q dq}{4\pi^2}\int_{0}^{2\pi} d\theta \frac{4\pi e^2 }{(k^2+q^2+2 k q \cos \theta)+\lambda^2}
```
```math
=-\frac{e^2}{\pi}\int_0^{\infty} n_q q dq \int_{0}^{2\pi}\frac{d\theta}{k^2+q^2+\lambda^2+2 k q\,\cos \theta}
```
- At $T=0$, use the integral $\int_0^{2\pi} \frac{d\theta}{a+\cos\theta}=\frac{2\pi}{\sqrt{a^2-1}}$ for $a>1$,
```math
\Sigma_x(k)=-e^2\int_0^{k_F} \frac{dq^2}{\sqrt{(k^2+q^2+\lambda^2)^2-4 k^2 q^2}}=-e^2\int_{\lambda^2-k^2}^{k_F^2+\lambda^2-k^2} \frac{dx}{\sqrt{x^2+4 k^2 \lambda^2}}
```
- Use the integral $\int \frac{dx}{\sqrt{x^2+1}}=\ln (\sqrt{x^2+1}+x)+\text{Const}=\text{arcsinh}(x)+\text{Const}$
- For Yukawa interaction $\lambda>0$,
```math
\Sigma_x(k)=-e^2 \ln \frac{\sqrt{(k_F^2+\lambda^2-k^2)^2+4k^2\lambda^2}+k_F^2+\lambda^2-k^2}{\sqrt{(\lambda^2-k^2)^2+4k^2\lambda^2}+\lambda^2-k^2}
```
- Or equivalently,
```math
\Sigma_x(k)=-e^2 \ln \frac{\sqrt{(k_F^2+\lambda^2-k^2)^2+4k^2\lambda^2}+k_F^2+\lambda^2-k^2}{2\lambda^2}
```
!!! warning
For the Coulomb interaction $\lambda=0$, the integral diverges. The problem is not well-defined.
- Interaction (**3D Yukawa interaction in the plane**):
```math
v(r)=\frac{e^2 \exp(-\lambda r)}{r}\rightarrow v_q=\frac{2\pi e^2}{\sqrt{q^2+\lambda^2}}
```
- The Fock diagram,
```math
\begin{aligned}
\Sigma_x(k) = -\int \frac{d^2q}{(2\pi)^2} n_q v_{k-q} &=-\int_0^{\infty}\frac{ n_q q dq}{4\pi^2}\int_{0}^{2\pi} d\theta \frac{2\pi e^2 }{\sqrt{k^2+q^2+2 k q \cos \theta+\lambda^2}} \\
&=-\frac{e^2}{2\pi k}\int_0^{\infty} n_q dq \left[\sqrt{(k+q)^2+\lambda^2} - \sqrt{(k-q)^2+\lambda^2} \right] .
\end{aligned}
```
- At $T=0$,
```math
\begin{aligned}
\Sigma_x(k) &= -\frac{e^2}{2\pi k}\int_0^{k_F} dq \left[\sqrt{(k+q)^2+\lambda^2} - \sqrt{(k-q)^2+\lambda^2} \right] \\
&=-\frac{e^2}{4\pi k} \left\{(k-k_F)\sqrt{(k-k_F)^2+\lambda^2}+ (k+k_F)\sqrt{(k+k_F)^2+\lambda^2} -2k\sqrt{k^2+\lambda^2} +\lambda^2 \ln \frac{\left[k-k_F+\sqrt{(k-k_F)^2+\lambda^2}\right] \left[k+k_F+\sqrt{(k+k_F)^2+\lambda^2}\right]}{ (k+\sqrt{k^2+\lambda^2})^2} \right\} .
\end{aligned}
```
- For Coulomb interaction ``\lambda=0``,
```math
\Sigma_x(k)= -\frac{e^2}{4\pi k} \left[(k-k_F)|k-k_F| +(k+k_F)|k+k_F| - 2k|k| \right] .
```
- For ``k\to 0``
```math
\Sigma_x(k=0)= -\frac{e^2}{\pi} \left(\sqrt{k_F^2+\lambda^2}-\lambda \right) .
```
**Reference**:
1. [GW for electron gas](http://hauleweb.rutgers.edu/tutorials/files_FAQ/GWelectronGas.html)
2. Mahan, Gerald D. Many-particle physics. Springer Science & Business Media, 2013. Chapter 5.1
| ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 4246 |
# Ladder (Particle-particle bubble) of free electrons
```math
\begin{split}
&\int \frac{d^3 \vec{p}}{(2\pi)^3} T \sum_{i \omega_n} \frac{1}{i \omega_n+i \Omega_n-\frac{(\vec{k}+\vec{p})^2}{2 m}+\mu} \frac{1}{-i \omega_n-\frac{p^2}{2 m}+\mu} \\
=&\int \frac{d^3 \vec{p}}{(2\pi)^3} \frac{f\left(\frac{(\vec{k}+\vec{p})^2}{2 m}-\mu\right)-f\left(-\frac{p^2}{2 m}+\mu\right)}{i \Omega_n-\frac{(\vec{k}+\vec{p})^2}{2 m}-\frac{p^2}{2 m}+2 \mu}
\end{split}
```
- In the case ``k>2k_F``
Define ``\vec{k}+\vec{p}=-\vec{p}' `` for the first term, then rename it as ``\vec{p}``,
```math
\begin{split}
=& \int \frac{d^3 \vec{p}}{(2\pi)^3} \frac{f\left(\frac{p^2}{2 m}-\mu\right)-f\left(-\frac{p^2}{2 m}+\mu\right)}{i \Omega_n-\frac{p^2}{2 m}-\frac{(\vec{p}+\vec{k})^2}{2 m}+2 \mu} \\
=& \int \frac{d^3 \vec{p}}{(2\pi)^3} \frac{2f\left(\frac{p^2}{2 m}-\mu\right)-1}{i \Omega_n-\frac{p^2}{2 m}-\frac{(\vec{p}+\vec{k})^2}{2 m}+2 \mu} \\
=& -\int \frac{d^3 \vec{p}}{(2\pi)^3} \frac{1}{i \Omega_n-\frac{p^2}{2 m}-\frac{(\vec{p}+\vec{k})^2}{2 m}+2 \mu} +\int \frac{d^3 \vec{p}}{(2 \pi)^3} \frac{2f\left(\frac{p^2}{2 m}-\mu\right)}{i \Omega_n-\frac{p^2}{2 m}-\frac{\left(\vec{p}+\vec{k}\right)^2}{2 m}+2 \mu}
\end{split}
```
Define ``\vec{p}'=\vec{p}+\vec{k}/2``, the first term becomes
```math
\frac{1}{i \Omega_n-\frac{\left(\vec{p}^{\prime}+\vec{k}/2\right)^2}{2 m}-\frac{\left(\vec{p}^{\prime}-\vec{k}/2\right)^2}{2 m}+2\mu}=\frac{1}{i \Omega_n-\frac{\vec{p}^{\prime 2}}{m}-\frac{k^2}{4 m}+2 \mu}
```
```math
\begin{split}
=& -\int \frac{d^3 \vec{p}}{(2\pi)^3} \frac{1}{i \Omega_n-\frac{p^2}{m}-\frac{k^2}{4 m}+2 \mu}\\
=&4 \pi \int \frac{d p}{(2 \pi)^3} \frac{p^2}{\frac{p^2}{m}+\frac{k^2}{4 m}-i \Omega_n-2 \mu} \\
=&4 \pi m^{3/2} \int \frac{d p/m^{1/2}}{(2 \pi)^3} \frac{p^2/m}{\frac{p^2}{m}+\frac{k^2}{4 m}-i \Omega_n-2 \mu} \\
=&4 \pi m^{3/2} \int \frac{d x}{(2 \pi)^3} \frac{x^2}{x^2+\frac{k^2}{4 m}-i \Omega_n-2 \mu}
\end{split}
```
Using ``\int \frac{x^2 d x}{x^2+a}=x-\sqrt{a} \tan ^{-1}\left(\frac{x}{\sqrt{a}}\right)``,
```math
\begin{split}
=&4 \pi m^{3/2} \int \frac{d x}{(2 \pi)^3} \frac{x^2}{x^2+\frac{k^2}{4 m}-i \Omega_n-2 \mu} \\
=& \frac{4\pi m^{3/2}}{(2\pi)^3}\frac{\Lambda}{\sqrt{m}}-\frac{4\pi m^{3/2}}{(2\pi)^3}\sqrt{\frac{k^2}{4 m}-i \Omega_n-2 \mu}\cdot \left.\tan^{-1}\left(\frac{x}{\sqrt{\frac{k^2}{4 m}-i \Omega_n-2 \mu}}\right)\right|^{\Lambda/\sqrt{m}}_0 \\
=& \frac{m}{2\pi^2}\Lambda-\frac{m^{3/2}}{4\pi}\sqrt{\frac{k^2}{4 m}-i \Omega_n-2 \mu}
\end{split}
```
We conclude
```math
\begin{split}
&\int \frac{d^3 \vec{p}}{(2\pi)^3} T \sum_{i \omega_n} \frac{1}{i \omega_n+i \Omega_n-\frac{(\vec{k}+\vec{p})^2}{2 m}+\mu} \frac{1}{-i \omega_n-\frac{p^2}{2 m}+\mu} \\
=&\frac{m \Lambda}{2\pi^2} -\frac{m^{3 / 2}}{4\pi} \sqrt{-i \Omega_n+\frac{k^2}{4 m}-2 \mu}
+\int \frac{d^3 \vec{p}}{(2 \pi)^3} \frac{2f\left(\frac{p^2}{2 m}-\mu\right)}{i \Omega_n-\frac{p^2}{2 m}-\frac{\left(\vec{p}+\vec{k}\right)^2}{2 m}+2 \mu}
\end{split}
```
- Generic ``k``
```math
\begin{split}
=& \int \frac{d^3 \vec{p}}{(2\pi)^3} \frac{2f\left(\frac{p^2}{2 m}-\mu\right)-1}{i \Omega_n-\frac{p^2}{2 m}-\frac{(\vec{p}+\vec{k})^2}{2 m}+2 \mu} \\
=& \frac{1}{(2\pi)^2} \int_0^\pi \sin(\theta)d\theta \int_0^\infty p^2 dp \frac{2f\left(\frac{p^2}{2 m}-\mu\right)-1}{i \Omega_n-\frac{p^2}{m}-\frac{k^2}{2m}-\frac{kp\cos(\theta)}{m}+2 \mu}
\end{split}
```
The angle can be integrated explicitly,
```math
-\int_0^\pi \frac{d\cos\theta}{a+b\cos\theta} = \int_{-1}^{1} \frac{dx}{a+bx} = \frac{1}{b} \ln\frac{a+b}{a-b}
```
where ``a`` has a nonzero imaginary part as long as ``\Omega_n`` does not vanish, and the logarithm is taken with the convention ``\ln(a/b) \equiv \ln(a)-\ln(b)``.
Therefore, for generic Matsubara-frequency,
```math
= \frac{m}{(2\pi)^2} \int_0^\infty dp \left[\frac{p}{k}\ln \frac{i\Omega_n -\frac{p^2}{m}-\frac{k^2}{2m}+\frac{pk}{m}+2\mu}{i\Omega_n -\frac{p^2}{m}-\frac{k^2}{2m}-\frac{pk}{m}+2\mu}\left(2f(\frac{p^2}{2m}-\mu)-1\right)-2\right]+\frac{m\Lambda}{2\pi^2}
```
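A minimal numerical sketch of this regularized expression (not the package API; `QuadGK` is assumed for the radial quadrature, the convention ``\ln(a/b) \equiv \ln(a)-\ln(b)`` is implemented by subtracting logarithms, and generic ``k \neq 0``, ``\Omega_n \neq 0`` is assumed):
```julia
using QuadGK

fermi(ϵ, β) = 1 / (exp(β * ϵ) + 1)

# Particle-particle bubble at generic k; the cutoff piece mΛ/(2π²) is
# kept separate, exactly as in the expression above.
function ladder_bubble(k, Ωn; m=0.5, μ=1.0, β=100.0, Λ=50.0)
    integrand(p) = begin
        num = im*Ωn - p^2/m - k^2/(2m) + p*k/m + 2μ
        den = im*Ωn - p^2/m - k^2/(2m) - p*k/m + 2μ
        p/k * (log(num) - log(den)) * (2fermi(p^2/(2m) - μ, β) - 1) - 2
    end
    val, _ = quadgk(integrand, 0, Inf)
    return m / (2π)^2 * val + m * Λ / (2π^2)
end
```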
In the small momentum limit, this simplifies to
```math
= \frac{m}{(2\pi)^2} \int_0^\infty dp \left[\frac{\frac{2p^2}{m}}{i\Omega_n -\frac{p^2}{m}+2\mu}\left(2f(\frac{p^2}{2m}-\mu)-1\right)-2\right]+\frac{m\Lambda}{2\pi^2}
``` | ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 8465 |
# Legendre Decomposition of Interaction
## Why decompose?
In many cases we need to calculate integrals of the following form:
```math
\begin{aligned}
\Delta(\vec{k}) = \int \frac{d^dp}{(2\pi)^d} W(|\vec{k}-\vec{p}|) F(\vec{p}),
\tag{1}
\end{aligned}
```
where only the momentum dependence is of concern. In such cases we assume that the interaction depends only on the magnitude of the momentum difference, and that the other part of the integrand has some spatial symmetry.
Two typical cases are the calculation of the self-energy and of the gap-function equation. In the self-energy case, the other part of the integrand depends only on the magnitude of the momentum, so we can integrate out the angular dependence to simplify the calculation. In the gap-function case, we do not know a priori the symmetry of the anomalous correlator, but we can always decompose the interaction and the anomalous correlator into angular-momentum channels. In that sense, the self-energy case is a special case where we only care about the s-wave channel.
## How?
We first express the interaction in Legendre polynomials for ``d (\geq 3)`` dimensions:
```math
\begin{aligned}
W(|\vec{k}-\vec{p}|)&=\sum_{\ell=0}^{\infty}\frac{N(d,\ell)}{2} w_{\ell}(k, p) P_{\ell}(\hat{kp}) \,,\\
w_{\ell}(k, p) &= \int_{-1}^{1}d\chi P_{\ell}(\chi) W(\sqrt{k^2+p^2-2kp\chi}) \,, \\
\end{aligned}
```
where
```math
N(d, \ell)=\frac{2 \ell+d-2}{\ell}\left(
\begin{array}{c}
\ell+d-3 \\
\ell-1
\end{array}\right)
```
denotes the number of linearly independent homogeneous harmonic polynomials of degree ``\ell`` in ``d`` dimensions (``d\geq 3``).
The Legendre polynomials of a scalar product of unit vectors can be expanded using the addition theorem for spherical harmonics
```math
P_{\ell}(\hat{k p})=\frac{\Omega_{d}}{N(d,\ell)} \sum_{m=1}^{N(d,\ell)} Y_{\ell m}(\hat{k}) Y_{\ell m}^{*}(\hat{p})\,,
```
where ``\Omega_{d}=2\pi^{\frac d 2}/\Gamma(\frac d 2)`` is the solid angle in ``d`` dimensions. The spherical harmonics are orthonormal:
```math
\int {\rm d}\hat k Y_{\ell m}(\hat k) Y^*_{\ell^\prime m^\prime}(\hat k) = \delta_{\ell \ell^\prime} \delta_{mm^\prime} \,.
```
Hence, the ``W(|\vec{k}-\vec{p}|)`` function can be expressed further as
```math
W(|\vec{k}-\vec{p}|)=\frac{\Omega_d}{2} \sum_{\ell m} w_{\ell}(k, p) Y_{\ell m}(\hat{k})Y_{\ell m}^{*}(\hat{p}).
```
The other part of the integrand could also be decomposed as
```math
\begin{aligned}
F(\vec p) &= \sum_{\ell m} f_{\ell m}(p) Y_{\ell m}(\hat p)\,, \\
f_{\ell m}(p) &= \int d\hat{p}\,Y^*_{\ell m}(\hat{p})F(\vec{p})
\end{aligned}
```
Projecting Eq. (1) onto the spherical harmonic ``Y_{\ell m}(\hat k)``, we have
```math
\begin{aligned}
\sum_{\ell m} \Delta_{\ell m }(k)Y_{\ell m}(\hat k) &= \frac{\Omega_d}{2} \int \frac{{\rm d}\vec p}{(2\pi)^d} \sum_{\ell m} w_{\ell}(k,p) Y_{\ell m}(\hat{k}) Y_{\ell m}^{*}(\hat{p} ) \sum_{\ell^\prime m^\prime}f_{\ell^\prime m^\prime}(p) Y_{\ell^\prime m^\prime}(\hat p) \\
&=\frac{\Omega_d}{2} \sum_{\ell m} \int \frac{p^{d-1}dp}{(2\pi)^d} w_{\ell}(k,p) f_{\ell m}(p) Y_{\ell m}(\hat k) \,,
\end{aligned}
```
which leads to equations that decouple between angular-momentum channels.
For the gap-function equation, we have
```math
\begin{aligned}
\Delta_l(k) = \frac{\Omega_d}{2} \int \frac{p^{d-1} {\rm d}p}{(2\pi)^d} w_l(k, p) f_l(p) \,.
\end{aligned}
```
For the ``GW`` self-energy, which is symmetric in ``\hat k``, we have
```math
\begin{aligned}
\Sigma(k) = \frac{\Omega_d}{2} \int \frac{p^{d-1} {\rm d}p}{(2\pi)^d} w_0(k, p) G(p).
\end{aligned}
```
## Helper function
We can perform the decomposition with helper functions:
```math
\begin{aligned}
H_n(y) = \int_{0}^{y} z^n W(z)dz.
\end{aligned}
```
Then
```math
\begin{aligned}
\int_{|k-p|}^{k+p} z^n W(z)dz = H_n(k+p)-H_n(|k-p|)= \delta H_n(k, p).
\end{aligned}
```
Changing variables to ``z^2=k^2+p^2-2kp\chi`` in the ``w_\ell`` integral, we obtain
```math
w_{\ell}(k, p)=\frac{1}{k p} \int_{|k-p|}^{k+p} z d z P_{\ell}\left(\frac{k^{2}+p^{2}-z^{2}}{2 k p}\right) W(z)
```
Since ``P_{\ell}(x)`` is a polynomial in ``x``, for any ``k``, ``p``, and integer ``\ell``, ``w_\ell`` can be expressed as a combination of helper functions.
For the electron gas, the ``W`` function contains two terms: the bare interaction ``V`` and the generic ``W(q,\tau)``. We only tabulate helper functions for the second term; the helper functions ``h_n`` for the first term can be computed analytically.
## Three dimensions
For 3D, ``N(3,\ell)=2\ell +1``, and ``Y_{\ell m}`` is the standard spherical harmonic function. The ``GW`` self-energy is
```math
\begin{aligned}
\Sigma(k) = \int \frac{p^2 {\rm d}p}{4\pi^2} w_0(k, p) G(p) \,.
\end{aligned}
```
For 3D electron gas with bare interaction ``V=\frac{4\pi e^2}{q^2} \delta(\tau)\, \left( V(r)=\frac{e^2}{r} \right)``, the helper functions for the first term have
```math
\delta h_{1}(k, p)=4 \pi e^{2} \ln \frac{k+p}{|k-p|}, \quad \delta h_{3}(k, p)=4 \pi e^{2}[2 k p], \quad \delta h_{5}(k, p)=4 \pi e^{2}\left[2 k p\left(k^{2}+p^{2}\right)\right] .
```
3D Yukawa interaction has
```math
\begin{aligned}
V(r) &= \frac{e^2}{r}e^{-mr}, \quad V(q)=\frac{4\pi e^2}{q^2+m^2} ,\\
\delta h_{1}(k, p)&=2 \pi e^{2} \ln\left[\frac{(k+p)^2+m^2}{(k-p)^2+m^2}\right], \\
\delta h_{3}(k, p)&=4 \pi e^{2} \left[2kp - \frac{m^2}{2} \ln \frac{(k+p)^2+m^2}{(k-p)^2+m^2} \right], \\
\delta h_{5}(k, p)&=4 \pi e^{2}\left[2 k p\left(k^2+p^2 -m^2\right) +\frac{m^4}{2}\ln \frac{(k+p)^2+m^2}{(k-p)^2+m^2} \right].
\end{aligned}
```
``w_\ell`` can then be expressed as
```math
\begin{aligned}
w_0(k,p) &= \frac{1}{kp} \delta H_1(k, p), \\
w_1(k,p) &= \frac{1}{2{(kp)}^2} {[(k^2+p^2)\delta H_1(k, p)-\delta H_3(k, p)]}, \\
w_2(k,p) &= \frac{1}{{(2kp)}^3}
{\{[3{(k^2+p^2)}^2-4k^2p^2]\delta H_1(k, p)-6(k^2+p^2)\delta H_3(k, p)+3\delta H_5(k, p)\} }.
\end{aligned}
```
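These expressions translate directly into code; a minimal Julia sketch for the Yukawa case (not the package API):
```julia
# δH_n(k, p) for the 3D Yukawa interaction V(q) = 4πe²/(q² + m²).
lograt(k, p, m) = log(((k + p)^2 + m^2) / ((k - p)^2 + m^2))
δH1(k, p, m; e2=1.0) = 2π * e2 * lograt(k, p, m)
δH3(k, p, m; e2=1.0) = 4π * e2 * (2k*p - m^2/2 * lograt(k, p, m))
δH5(k, p, m; e2=1.0) = 4π * e2 * (2k*p*(k^2 + p^2 - m^2) + m^4/2 * lograt(k, p, m))

# Lowest three Legendre components of the interaction.
w0(k, p, m) = δH1(k, p, m) / (k*p)
w1(k, p, m) = ((k^2 + p^2) * δH1(k, p, m) - δH3(k, p, m)) / (2 * (k*p)^2)
w2(k, p, m) = ((3 * (k^2 + p^2)^2 - 4 * k^2 * p^2) * δH1(k, p, m) -
               6 * (k^2 + p^2) * δH3(k, p, m) + 3 * δH5(k, p, m)) / (2k*p)^3
```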
## Two dimensions
For 2D, the interaction is expressed as Fourier series:
```math
\begin{aligned}
W(|\vec{k}-\vec{p}|)&=\frac{w_0}{2\pi} + \sum_{\ell=1}^{\infty} w_{\ell}(k, p) \frac{\cos[\ell(\theta_{\hat k} - \theta_{\hat p})]}{\pi} \,,\\
w_{\ell}(k, p) &= \int_{0}^{2\pi} d\theta \cos(\ell \theta) W(\sqrt{k^2+p^2-2kp\cos \theta}) \,. \\
\end{aligned}
```
Projecting Eq.(1), we have
```math
\begin{aligned}
&\frac{\Delta_{0}(k)}{2\pi}+ \sum_{\ell=1} \Delta_{\ell}(k) \frac{\cos(\ell \theta_{\hat k})}{\pi} +\overline{\Delta}_{\ell}(k) \frac{\sin(\ell \theta_{\hat k})}{\pi} \\
=& \int \frac{{\rm d}\vec p}{(2\pi)^2} \frac{w_{0}(k,p)}{2\pi} \frac{f_0(p)}{2\pi} + \sum_{\ell=1} w_{\ell}(k,p) \frac{\cos(\ell \theta_{\hat k})\cos(\ell \theta_{\hat p}) + \sin(\ell \theta_{\hat k})\sin(\ell \theta_{\hat p}) }{\pi} \sum_{\ell^\prime=1} \left[ f_{\ell^\prime}(p) \frac{\cos(\ell^\prime \theta_{\hat p})}{\pi} + \overline{f}_{\ell^\prime}(p) \frac{\sin(\ell^\prime \theta_{\hat p})}{\pi} \right] \\
=& \int \frac{pdp}{(2\pi)^2} w_{0}(k,p) \frac{f_0(p)}{2\pi} + \sum_{\ell=1} w_{\ell}(k,p) \left[f_{\ell}(p) \frac{\cos(\ell \theta_{\hat k})}{\pi} +\overline{f}_{\ell}(p) \frac{\sin(\ell \theta_{\hat k})}{\pi} \right] \,,
\end{aligned}
```
where ``\Delta_{\ell}(k) =\int_{0}^{2\pi} d\theta \cos(\ell \theta) \Delta(\vec k) ``. This leads to equations decoupled into ``\ell`` channels.
The ``GW`` self energy corresponding to ``\ell=0`` is
```math
\Sigma(k) = \int \frac{p {\rm d}p}{(2\pi)^2} w_0(k, p) G(p) \,,
```
and for gap function we have
```math
\Delta_l(k) = \int \frac{p {\rm d}p}{(2\pi)^2} w_l(k, p) f_l(p) \,.
```
Since the ``\sin \theta`` factor is absent from the 2D angular integration, the helper functions become complicated and are not useful in 2D. Here, we directly calculate the
``w_{\ell}(k,p)`` integral with a CompositeGrid (range ``[0, \pi]``, log-dense at ``0``).
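A minimal sketch of this direct angular quadrature, using `QuadGK` in place of the package's `CompositeGrid` (not the package API):
```julia
using QuadGK

# w_ℓ(k, p) = ∫₀^{2π} dθ cos(ℓθ) W(√(k² + p² − 2kp cosθ));
# the integrand is symmetric about θ = π, so integrate [0, π] and double.
function wl_2d(ℓ::Int, k, p, W; rtol=1e-8)
    val, _ = quadgk(θ -> cos(ℓ*θ) * W(sqrt(k^2 + p^2 - 2k*p*cos(θ))), 0, π; rtol=rtol)
    return 2val
end

# Example with the 2D Yukawa kernel V(q) = 2πe²/√(q² + m²) derived below:
V2d(q; e2=1.0, m=0.5) = 2π * e2 / sqrt(q^2 + m^2)
wl_2d(0, 1.0, 1.2, V2d)
```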
- standard 2D Coulomb potential (3D Coulomb interaction restricted to the 2D plane): ``V=\frac{2\pi e^2}{q} \delta(\tau)\, \left( V(r)=\frac{e^2}{r} \right)`` has the same real-space form as in the 3D electron gas.
- true 2D Coulomb potential (obeying Gauss's law in 2D): ``V=\frac{4\pi e^2}{q^2} \delta(\tau)\,\left(V(r)=-\ln \frac{r}{L}\right)`` has the same momentum-space form as in the 3D electron gas.
- 2D Yukawa interaction has
```math
\begin{aligned}
V(r)&=\frac{e^2}{r} e^{-mr} ,\\
V(q)&=\int {\rm d}^2\vec r V(r) e^{i\vec q\cdot \vec r} \\
& = \int r dr \frac{e^2}{r} e^{-mr} \int_0^{2\pi} d\theta e^{iqr\cos \theta} \\
& = \int dr e^2 e^{-mr} 2\pi J_0(qr) \\
& = \frac{2\pi e^2}{\sqrt{q^2+m^2}} \,,
\end{aligned}
```
**Reference**:
1. Christopher Frye and Costas J. Efthimiou, Spherical Harmonics in ``p`` Dimensions, arXiv:1205.3548 | ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 3444 |
# Decomposition of Interaction in two dimensions
In the ``GW`` approximation, we calculate the self-energy as
```math
\Sigma(\mathbf{k},\omega_n)=-T\int \frac{{\rm d}^d \mathbf{p}}{(2\pi)^d} \sum_m G(\mathbf{p},\omega_m)W(\mathbf{k-p},\omega_n-\omega_m) \,,
\tag{1}
```
where ``G`` is the Green's function and ``W`` is the effective interaction. Here, we suppress the spin index.
## Spherical harmonic representation
We first express the ``W(q,\tau)`` function as an expansion in Legendre polynomials ``P_\ell(\chi)``
```math
\begin{gathered}
W(|\mathbf{k}-\mathbf{p}|, \tau)=\sum_{\ell=0}^{\infty} \bar{w}_{\ell}(k, p, \tau) P_{\ell}(\hat{k p}) \,, \\
\bar{w}_{\ell}(k, p, \tau)=\frac{N(d,\ell)}{2} \int_{-1}^{1} d \chi P_{\ell}(\chi) W\left(\sqrt{k^{2}+p^{2}-2 k p \chi} ,\tau\right)\,.
\end{gathered}
```
Since the Legendre polynomials of a scalar product of unit vectors can be expanded with spherical harmonics using
```math
P_{\ell}(\hat{k p})=\frac{\Omega_{d}}{N(d,\ell)} \sum_{m=1}^{N(d,\ell)} Y_{\ell m}(\hat{k}) Y_{\ell m}^{*}(\hat{p})\,,
```
where ``\Omega_{d}`` is the solid angle in ``d`` dimensions, and
```math
N(d, \ell)=\frac{2 \ell+d-2}{\ell}\left(
\begin{array}{c}
\ell+d-3 \\
\ell-1
\end{array}\right)
```
denotes the number of linearly independent homogeneous harmonic polynomials of degree ``\ell`` in ``d`` dimensions. The spherical harmonics are orthonormal as
```math
\int {\rm d}\Omega_{\hat k} Y_{\ell m}(\hat k) Y^*_{\ell^\prime m^\prime}(\hat k) = \delta_{\ell \ell^\prime} \delta_{mm^\prime} \,.
```
Hence, the ``W(q,\tau)`` function can be expressed further as
```math
W(|\mathbf{k}-\mathbf{p}|, \tau)=\sum_{\ell} \frac{\Omega_{d}}{N(d,\ell)} \bar{w}_{\ell}(k, p, \tau) \sum_{m} Y_{\ell m}(\hat{k}) Y_{\ell m}^{*}(\hat{p})
```
or
```math
W(|\mathbf{k}-\mathbf{p}|, \tau)= \frac{\Omega_{d}}{2}\sum_{\ell} w_{\ell}(k, p, \tau) \sum_{m} Y_{\ell m}(\hat{k}) Y_{\ell m}^{*}(\hat{p} )\,.
```
with
```math
w_{\ell}(k, p, \tau)=\int_{-1}^{1} d \chi P_{\ell}(\chi) W\left(\sqrt{k^{2}+p^{2}-2 k p \chi} ,\tau\right)
```
In addition, the Green's function ``G(\mathbf p, \tau)`` has
```math
\begin{aligned}
G(\mathbf p, \tau)= \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)} G_{\ell m}(p,\tau) Y_{\ell m}(\hat p)\,, \\
G_{\ell m}(p,\tau) = \int {\rm d}\Omega_{\hat p} G(\mathbf p,\tau) Y^*_{\ell m}(\hat p)
\end{aligned}
```
### Decouple with channels
By the spherical-harmonic expansion of Eq. (1), the self-energy satisfies
```math
\begin{aligned}
\sum_{\ell m} \Sigma_{\ell m }(k,\tau)Y_{\ell m}(\hat k) &= \frac{\Omega_d}{2} \int \frac{{\rm d}\mathbf p}{(2\pi)^d} \sum_{\ell m}G_{\ell m}(p, \tau) Y_{\ell m}(\hat p) \sum_{\ell^\prime m^\prime} w_{\ell^\prime}(k,p,\tau) Y_{\ell^\prime m^\prime}(\hat{k}) Y_{\ell^\prime m^\prime}^{*}(\hat{p} ) \\
&=\frac{\Omega_d}{2} \sum_{\ell m} \int \frac{p^{d-1}dp}{(2\pi)^d} G_{\ell m}(p, \tau) w_{\ell}(k,p,\tau) Y_{\ell m}(\hat k)
\end{aligned}
```
Since the self-energy is symmetric in ``\hat k``, we only need to project onto the s-wave channel, namely
```math
\Sigma(k,\tau) =\frac{\Omega_d}{2} \int \frac{p^{d-1}dp}{(2\pi)^d} G(p,\tau) w_0(k,p,\tau)
```
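A schematic Julia implementation of this s-wave projection (not the package API; ``G`` and ``w_0`` are assumed to be user-supplied callables):
```julia
using QuadGK, SpecialFunctions  # gamma() for the solid angle

Ωd(d) = 2 * π^(d/2) / gamma(d/2)  # solid angle in d dimensions

# Σ(k, τ) = (Ω_d/2) ∫ dp p^{d-1}/(2π)^d w₀(k, p, τ) G(p, τ)
function sigma_swave(k, τ, G, w0; d=3, pmax=20.0)
    val, _ = quadgk(p -> p^(d - 1) * G(p, τ) * w0(k, p, τ), 0, pmax)
    return Ωd(d) / 2 / (2π)^d * val
end
```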
## Two dimensions
```math
\begin{aligned}
N(2,\ell) &= 2 \\
Y_{\ell 1}(\hat k) &= \cos(\ell \theta),\; Y_{\ell 2}(\hat k) = \sin(\ell \theta) \\
P_{\ell}(\hat{kp}) &= \pi \cos[\ell(\theta_{\hat k} - \theta_{\hat p})]
\end{aligned}
```
Hence, the self-energy is
```math
\Sigma(k,\tau) = \int \frac{pdp}{4\pi} G(p,\tau)w_0(k,p,\tau)
``` | ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 2677 |
# Legendre Decomposition of Interaction
## Why decompose?
In many cases we need to calculate integrals of the following form:
```math
\begin{aligned}
\Delta(\vec{k}) = \int \frac{d^dp}{(2\pi)^d} W(|\vec{k}-\vec{p}|) F(\vec{p}),
\end{aligned}
```
where only momentum dependency is of our concern.
In such cases we assume that the interaction depends only on the magnitude of
the momentum difference, and the other part of the integrand has some sort of
space symmetry.
Two typical cases are the calculation of self-energy and gap-function equation.
In the case of self-energy, the other part of the integrand depends only on the
magnitude of the momentum, thus we can integrate out the angular dependence to
simplify the calculation. In the case of gap-function equation, we do not know
a priori the symmetry of the anomalous correlator, but we can always decompose the
interaction and anomalous correlator into angular momentum channels. In that sense,
the self-energy case is actually a special case where we only care about the s-wave channel.
## How?
From now on we illustrate the 3D case.
We first express the interaction in Legendre polynomials:
```math
\begin{aligned}
W(|\vec{k}-\vec{p}|)&=\sum_{l=0}^{\infty}\frac{2l+1}{2} w_l(k, p) P_{l}(\hat{kp}),\\
&=2\pi\sum_{l,m} w_l(k, p) Y_{lm}(\hat{k})Y_{lm}^{*}(\hat{p}).
\end{aligned}
```
with
```math
\begin{aligned}
w_l(k, p) = \int_{-1}^{1}d\chi P_l(\chi) W(\sqrt{k^2+p^2-2kp\chi}).
\end{aligned}
```
The other part of the integrand could also be decomposed as
```math
\begin{aligned}
f_{lm}(k) = \int d\hat{k}\,Y^*_{lm}(\hat{k})F(\vec{k})
\end{aligned}
```
In the cases mentioned above there is no dependence on ``m``, thus the whole calculation
can be decomposed by ``l``.
For gap-function equation we have
```math
\begin{aligned}
\Delta_l(k) = \int \frac{p^2dp}{4\pi^2} w_l(k, p) f_l(p),
\end{aligned}
```
and for self-energy we have
```math
\begin{aligned}
\Sigma(k) = \int \frac{p^2dp}{4\pi^2} w_0(k, p) G(p).
\end{aligned}
```
## Helper function
We can do the decomposition with helper function:
```math
\begin{aligned}
H_n(y) = \int_{0}^{y} z^n W(z)dz.
\end{aligned}
```
Then
```math
\begin{aligned}
\int_{|k-p|}^{k+p} z^n W(z)dz = H_n(k+p)-H_n(|k-p|)= \delta H_n(k, p).
\end{aligned}
```
Thus all integrals for ``w_l`` can be expressed as combinations of helper functions:
```math
\begin{aligned}
w_0(k,p) &= \frac{1}{kp} \delta H_1(k, p), \\
w_1(k,p) &= \frac{1}{2{(kp)}^2} {[(k^2+p^2)\delta H_1(k, p)-\delta H_3(k, p)]}, \\
w_2(k,p) &= \frac{1}{{(2kp)}^3}
{\{[3{(k^2+p^2)}^2-4k^2p^2]\delta H_1(k, p)-6(k^2+p^2)\delta H_3(k, p)+3\delta H_5(k, p)\} }.
\end{aligned}
```
| ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 1705 |
# Polarization of free electron in two dimensions
We consider the polarization of the free electron gas in 2D,
```math
\Pi_0(q, \Omega_n)
=-S\int\frac{d^2 \vec k}{{(2\pi)}^2}
\frac{n(\epsilon_{\vec{k}+\vec{q}})-n(\epsilon_{\vec{k}})}{i\Omega_n+\epsilon_{\vec{k}}-\epsilon_{\vec{k}+\vec{q}}}
```
where ``n(\epsilon_{\vec k}) = 1/(e^{\beta\epsilon_{\vec k}}+1)``, ``\epsilon_{\vec k}=k^2/(2m)-\mu``, and ``S`` is the spin factor. It is expressed as
```math
\Pi_0(q, \Omega_n)=-S\int_0^{\infty} \frac{mkdk}{2\pi^2} n(\epsilon_k) \int_{0}^{2\pi} d\theta \left[ \frac{1}{i2m\Omega_n-2kq \cos\theta+q^2}-\frac{1}{i2m\Omega_n-2kq\cos \theta-q^2}\right]
```
## Static limit ``\Omega_n=0``
For real ``a, b`` with ``|a|>|b|``, the integral ``\int_0^{2\pi}\frac{d\theta}{a+b\cos \theta}=C\frac{2\pi}{\sqrt{a^2-b^2}}`` with ``C=\operatorname{sign}(a)``, while the principal value vanishes for ``|a|<|b|``; this restricts the momentum integral to ``k<q/2``. Hence, we have
```math
\Pi_0(q, 0)= -S\int_0^{q/2}\frac{mkdk}{2\pi^2} n(\epsilon_k) \frac{4\pi}{\sqrt{q^4-4k^2q^2}}
```
At zero temperature,
- for ``q<2k_F``,
```math
\Pi_0(q, 0)=-S\int_0^{q/2} \frac{m}{\pi q} \frac{dk^2}{\sqrt{q^2-4k^2}}=-\int_0^1\frac{mS}{4\pi}\frac{dx}{\sqrt{1-x}}=-\frac{mS}{2\pi} \,;
```
- for ``q>2k_F``,
```math
\Pi_0(q, 0)=-S\int_0^{k_F} \frac{m}{\pi q} \frac{dk^2}{\sqrt{q^2-4k^2}}=-\int_0^{4k_F^2/q^2}\frac{mS}{4\pi}\frac{dx}{\sqrt{1-x}}=-\frac{mS}{2\pi}\left( 1-\sqrt{1-\frac{4k_F^2}{q^2}}\right) \,.
```
## Zero temperature
```math
\Pi_0(q, \Omega_n)=-\frac{mS}{2\pi} \left[1-\frac{2k_F}{q}{\rm Re}\sqrt{\left(\frac{q}{2k_F}+ i \frac{m\Omega_n}{qk_F}\right)^2-1} \right] \,.
```
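A direct transcription of this closed form (a sketch, not the package API):
```julia
# Zero-temperature polarization of the 2D free electron gas.
function pi0_2d(q, Ωn; m=0.5, kF=1.0, S=2)
    z = q / (2kF) + im * m * Ωn / (q * kF)
    return -m * S / (2π) * (1 - 2kF / q * real(sqrt(z^2 - 1)))
end
```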
For ``q\to 0`` and ``\Omega_n \neq 0``, ``\Pi_0(q, \Omega_n) \to 0``. | ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 4491 |
# Polarization of free electron
## Generic formalism
The bare polarization is defined as
```math
\begin{aligned}
\Pi_0(\Omega_n, \vec{q})=
-S T\sum_{m}\int\frac{d^d k}{{(2\pi)}^d}G_0(\omega_m, \vec{k})G_0(\omega_m+\Omega_n, \vec{k}+\vec{q})
\end{aligned}
```
in the Matsubara representation, where ``S`` is the spin factor and ``G_0`` is the bare Green's function.
We have
```math
\begin{aligned}
G_0(\omega_m, \vec{k})=\frac{1}{i\omega_m-\epsilon_{\vec{k}}}
\end{aligned}
```
with bare electron dispersion given by ``\epsilon_{\vec{k}}``. By summing over frequency we have
```math
\begin{aligned}
\Pi_0(\Omega_n, \vec{q}) &= -S \int\frac{d^d \vec k}{{(2\pi)}^d}
\frac{n(\epsilon_{\vec{k}+\vec{q}})-n(\epsilon_{\vec{k}})}{i\Omega_n-\epsilon_{\vec{k}+\vec{q}}+\epsilon_{\vec{k}}} \\
&=-S\int \frac{d^d \vec k}{{(2\pi)}^d} n(\epsilon_{\vec k}) \left[ \frac{1}{i\Omega_n+\epsilon_{\vec k}-\epsilon_{\vec k+\vec q}}-\frac{1}{i\Omega_n+\epsilon_{\vec k-\vec q}-\epsilon_{\vec k}}\right]
\end{aligned}
```
with the Fermi distribution function
```math
n(\epsilon_{\vec k}) =\frac{1}{e^{\beta\epsilon_{\vec{k}}}+1}
```
## Free electron in 3D
From now on we consider free electrons in 3D, where ``d=3`` and the dispersion is ``\epsilon_{\vec{k}}=k^2/2m-\mu``. We have
```math
\begin{aligned}
\Pi_0(\Omega, \vec{q})&=-S\int_0^{\infty} \frac{k^2dk}{4\pi^2} n(\epsilon_k) \int_{-1}^{1} d(\cos \theta) \left[ \frac{1}{i\Omega+\epsilon_{\vec k}-\epsilon_{\vec k+\vec q}}-\frac{1}{i\Omega+\epsilon_{\vec k-\vec q}-\epsilon_{\vec k}}\right]\\
&=-S\int_0^{\infty} \frac{k^2dk}{4\pi^2} n(\epsilon_k) \frac{m}{kq}\ln\frac{4m^2\Omega^2+(q^2-2kq)^2}{4m^2\Omega^2+(q^2+2kq)^2} \,,
\end{aligned}
```
which can be evaluated as a one-dimensional integral over ``k``.
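For instance, a sketch of this one-dimensional quadrature with `QuadGK` (not the package API):
```julia
using QuadGK

# Π₀(Ω, q) of the 3D free electron gas at temperature 1/β.
function pi0_3d(q, Ω; m=0.5, μ=1.0, β=100.0, S=2, kmax=20.0)
    nF(ϵ) = 1 / (exp(β * ϵ) + 1)
    integrand(k) = k^2 / (4π^2) * nF(k^2/(2m) - μ) * m / (k*q) *
                   log((4m^2 * Ω^2 + (q^2 - 2k*q)^2) /
                       (4m^2 * Ω^2 + (q^2 + 2k*q)^2))
    val, _ = quadgk(integrand, 0, kmax)
    return -S * val
end
```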
- In the limit ``q^2+2k_F q \ll 2m\Omega_n``, the integrand of ``\Pi_0`` can be expanded as
```math
\frac{m}{kq}\ln\frac{4m^2\Omega^2+(q^2-2kq)^2}{4m^2\Omega^2+(q^2+2kq)^2}=-\frac{2q^2}{m\Omega^2}+\frac{2k^2q^4}{m^3\Omega^4}+\frac{(-4k^2+m^2\Omega^2)q^6}{2m^5\Omega^6}+...
```
- The zero-temperature polarization can be calculated explicitly
```math
\Pi_0(\Omega,q) = -\frac{N_F}{2}\left[1-\frac{1}{8 k_F q}\left\{ \left[\frac{(i2m\Omega-q^2)^2}{q^2}-4 k_F^2\right]\log\left(\frac{i2m\Omega-q^2-2 k_F q}{i2m\Omega-q^2+2 k_F q}\right)+\left[\frac{(i2m\Omega+q^2)^2}{q^2}-4 k_F^2\right]\log\left(\frac{i2m\Omega+q^2+2 k_F q}{i2m\Omega+q^2-2 k_F q}\right)\right\}\right]
```
- In the static limit ``\Omega=0``,
```math
\Pi_0(0, q) = -N_F F(q/2k_F) \,,
```
where ``N_F=Smk_F/(2\pi^2)`` is the density of states, and ``F(x)=\frac{1}{2}-\frac{x^2-1}{4x}\ln \left|\frac{1+x}{1-x}\right|`` is the Lindhard function.
The weak logarithmic singularity near ``2k_F`` is the cause of the Friedel oscillation and Kohn-Luttinger superconductivity.
## Polarization in the large frequency limit ``\Omega \gg q v_F``
As derived in the *Generic formalism* section above:
- In the Matsubara representation,
```math
P_{q, \Omega}=S\int \frac{d^D k}{(2\pi)^D} \frac{n(\epsilon_k)-n(\epsilon_{k+q})}{i\Omega+\epsilon_k-\epsilon_{k+q}}
```
```math
P_{q, \Omega}=S\int \frac{d^D k}{(2\pi)^D} n(\epsilon_k) \left[ \frac{1}{i\Omega+\epsilon_k-\epsilon_{k+q}}-\frac{1}{i\Omega+\epsilon_{k-q}-\epsilon_{k}}\right]
```
Consider the limit ``\Omega \gg q v_F``,
```math
P_{q, \Omega}=\frac{S}{i\Omega}\int \frac{d^D k}{(2\pi)^D} n(\epsilon_k) \left[ \frac{1}{1-\Lambda_q/i\Omega}-\frac{1}{1+\Lambda_q/i\Omega}\right]
```
where ``\Lambda_q=\epsilon_{k+q}-\epsilon_k=(2k \cdot q+q^2)/2m``
```math
P_{q, \Omega}=\frac{2S}{(i\Omega)^2}\int \frac{d^D k}{(2\pi)^D} n(\epsilon_k) \left[\Lambda_q+\Lambda_q^3/(i\Omega)^2+...\right]=\frac{n}{m}\frac{q^2}{(i\Omega)^2}\left[1+O\left(\frac{q^2}{(i\Omega)^2}\right)\right]
```
which is exact in arbitrary dimensions; here the electron density is ``n=S\int \frac{d^D k}{(2\pi)^D} n(\epsilon_k)``.
- The correction term is ``\left[\frac{3}{5}(q v_F)^2+\epsilon_q^2\right]/(i\Omega)^2`` for 3D.
In real frequency,
```math
\operatorname{Re} P_{q, \omega} = \frac{n}{m}\frac{q^2}{\omega^2}\left[1+O\left(\frac{q^2}{\omega^2}\right)\right]
```
### Plasmon frequency
Plasmon dispersion is the zeros of the dynamic dielectric function,
```math
\epsilon=1-v_q P_{q, \omega}=0
```
where ``v_q=4\pi e^2/q^2``.
The dispersion of the plasmon is,
```math
\omega_p^2=\frac{4\pi e^2n}{m}\left[1+O\left(\frac{q^2}{\omega^2}\right)\right]
```
A plasmon emerges only if ``\epsilon_q \cdot v_q`` tends to a nonzero constant as ``q \to 0`` (i.e. ``v_q \sim 1/q^2``).
| ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 3461 |
# Polarization Approximation
In this note, we discuss several approximations of the polarization.
## Linearized Dispersion
The original polarization is given by,
```math
\begin{aligned}
\Pi_0(\Omega_n, \vec{q}) &= -S \int\frac{d^d \vec k}{{(2\pi)}^d}
\frac{n(\epsilon_{\vec{k}+\vec{q}})-n(\epsilon_{\vec{k}})}{i\Omega_n-\epsilon_{\vec{k}+\vec{q}}+\epsilon_{\vec{k}}} \\
&=-S\int \frac{d^d \vec k}{{(2\pi)}^d} n(\epsilon_{\vec k}) \left[ \frac{1}{i\Omega+\epsilon_{\vec k}-\epsilon_{\vec k+\vec q}}-\frac{1}{i\Omega+\epsilon_{\vec k-\vec q}-\epsilon_{\vec k}}\right].
\end{aligned}
```
One possible approximation is to replace the kinetic energy with a dispersion linearized near the Fermi surface,
```math
\xi_{\mathbf{p}+\mathbf{q}}-\xi_{\mathbf{p}}=(1 / m) \mathbf{p} \cdot \mathbf{q}+\mathcal{O}\left(q^{2}\right)
```
so that,
```math
n_{\mathrm{F}}\left(\epsilon_{\mathbf{p}+\mathbf{q}}\right)-n_{\mathrm{F}}\left(\epsilon_{\mathbf{p}}\right) \simeq \partial_{\epsilon_p} n_{\mathrm{F}}\left(\epsilon_{p}\right)(1 / m) \mathbf{p} \cdot \mathbf{q} \simeq-\delta\left(\epsilon_{p}-\mu\right)(1 / m) \mathbf{p} \cdot \mathbf{q}
```
where, in the zero-temperature limit, the last equality becomes exact. Converting the momentum sum into an integral, we thus obtain
```math
\Pi_0(\mathbf{q}, \omega_{m})=-S \int \frac{d^{3} p}{(2 \pi)^{3}} \delta\left(\epsilon_{p}-\mu\right) \frac{\frac{1}{m} \mathbf{p} \cdot \mathbf{q}}{i \omega_{m}+\frac{1}{m} \mathbf{p} \cdot \mathbf{q}}.
```
Evaluating the integral gives
```math
\begin{aligned}
\Pi_0(\mathbf{q}, \omega_{m}) &=-\frac{S}{(2 \pi)^{3}} \int d p\, p^{2}\, \delta\left(\epsilon_{p}-\mu\right) \int d \Omega\, \frac{v_{\mathrm{F}} \mathbf{n} \cdot \mathbf{q}}{i \omega_{m}+v_{\mathrm{F}} \mathbf{n} \cdot \mathbf{q}} \\
&=-\underbrace{\frac{S}{(2 \pi)^{3}} \int d p\, p^{2}\, \delta\left(\epsilon_{p}-\mu\right) \int d \Omega}_{N_F} \frac{1}{\int d\Omega} \int d\Omega\, \frac{v_{\mathrm{F}} \mathbf{n} \cdot \mathbf{q}}{i \omega_{m}+v_{\mathrm{F}} \mathbf{n} \cdot \mathbf{q}} \\
&=-\frac{N_F}{2} \int_{-1}^{1} d x \frac{v_{\mathrm{F}} x q}{i \omega_{m}+v_{\mathrm{F}} x q}=-N_F\left[1-\frac{i \omega_{m}}{2 v_{\mathrm{F}} q} \ln \left(\frac{i \omega_{m}+v_{\mathrm{F}} q}{i \omega_{m}-v_{\mathrm{F}} q}\right)\right] .
\end{aligned}
```
The above derivation is adapted from A. Altland and B. Simons' book *Condensed Matter Field Theory*, Chapter 5.2, Eq. (5.30).
### Two limits:
For the exact free-electron polarization, we expect the following limiting behavior.
In the limit ``q ≫ ω_m``,
```math
Π_0(q, iω_m) \rightarrow -N_F \left(1-\frac{π}{2}\frac{|ω_m|}{v_{\mathrm{F}} q}\right)
```
where we use the Taylor expansion of ``\ln\left[\frac{1+i x}{-1+i x}\right]`` with ``x=\omega_m/(v_{\mathrm{F}} q)``,
```math
\ln\left[\frac{1+i x}{-1+i x}\right]=
\begin{cases}
-i \pi +2 i x-\frac{2 i x^3}{3}+O\left(x^4\right) & x \ge 0 \\
\;\;\, i \pi +2 i x-\frac{2 i x^3}{3}+O\left(x^4\right) & x<0
\end{cases}
```
and in the limit ``q ≪ ω_m``,
```math
Π_0(q, iω_m) \rightarrow -\frac{N_F}{3}\left(\frac{v_{\mathrm{F}} q}{ω_m}\right)^2 = -N_F \left(\frac{q}{q_{\mathrm{TF}}}\frac{\omega_p}{\omega_m}\right)^2
```
where the plasma frequency and the Thomas-Fermi screening momentum are related by ``ω_p=v_F q_{\mathrm{TF}}/\sqrt{3}``.
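Both limits can be checked against a direct evaluation of the linearized-dispersion result; a minimal sketch in units of ``N_F`` (not the package API):
```julia
# Π₀ with linearized dispersion; x = ωm/(vF q). The result is real.
function pi0_linearized(q, ωm; vF=1.0, NF=1.0)
    x = ωm / (vF * q)
    return -NF * real(1 - im * x / 2 * log((im * x + 1) / (im * x - 1)))
end

pi0_linearized(10.0, 0.1)  # q ≫ ωm: ≈ -NF (1 - (π/2)|ωm|/(vF q))
pi0_linearized(0.1, 10.0)  # q ≪ ωm: ≈ -(NF/3) (vF q / ωm)²
```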
## Plasma Approximation
It is sometimes convenient to approximate the polarization with the plasma poles,
```math
Π_0(q, iω_m) \approx -N_F \frac{(q/q_{\mathrm{TF}})^2}{(q/q_{\mathrm{TF}})^2+(\omega_m/\omega_p)^2}
``` | ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MPL-2.0" ] | 0.2.5 | 3a6ed6c55afc461d479b8ef5114cf6733f2cef56 | docs | 3029 |
# Quasiparticle properties of the electron gas
## Renormalization factor
The renormalization constant $Z$ gives the strength of the quasiparticle pole, and can be obtained from the frequency dependence of the self-energy as
```math
Z=\frac{1}{1-\left.\frac{1}{\hbar} \frac{\partial \operatorname{Im} \Sigma(k, i\omega_n)}{\partial \omega_n}\right|_{k=k_{F}, \omega_n=0^+}}
```
## Effective mass
```math
\frac{m^{*}}{m}= \frac{Z^{-1}}{1+\frac{m}{\hbar^{2} k_{F}} \left. \frac{\partial \operatorname{Re} \Sigma(k, i\omega_n)}{\partial k}\right|_{k=k_{F}, \omega_n=0^+}}
```
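Both quantities can be estimated from a numerical self-energy by finite differences; a schematic sketch ($\Sigma(k, i\omega_n)$ is a hypothetical user-supplied callable, not the package API; $\hbar=1$):
```julia
# Z and m*/m at the Fermi surface; ω1 is the first Matsubara frequency,
# used to approximate the ωn-derivative at 0⁺.
function quasiparticle(Σ, kF, ω1; m=0.5, δk=1e-4)
    Z = 1 / (1 - imag(Σ(kF, ω1)) / ω1)
    dReΣ_dk = (real(Σ(kF + δk, ω1)) - real(Σ(kF - δk, ω1))) / (2δk)
    return (; Z, mratio = 1 / (Z * (1 + m / kF * dReΣ_dk)))  # mratio = m*/m
end
```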
## Benchmark
### 2D UEG
| $r_s$ | $Z$ (RPA) | $Z$ ($G_0W_0$ [1]) | $m^*/m$ (RPA) | $m^*/m$ ($G_0W_0$ [1]) |
| :---: | :-------: | :----------------: | :-----------: | :--------------------: |
| 0.5 | 0.787 | 0.786 | | 0.981 |
| 1.0 | 0.662 | 0.662 | | 1.020 |
| 2.0 | 0.519 | 0.519 | | 1.078 |
| 3.0 | 0.437 | 0.437 | | 1.117 |
| 4.0 | 0.383 | 0.383 | | 1.143 |
| 5.0 | 0.344 | 0.344 | | 1.162 |
| 8.0 | 0.271 | 0.270 | | 1.196 |
| 10.0 | 0.240 | 0.240 | | 1.209 |
### 3D UEG
| $r_s$ | $Z$ (RPA) | $Z$ ($G_0W_0$) | $m^*/m$ (RPA) [5] | $m^*/m$ ($G_0W_0$ [2]) |
| :---: | :-------: | :-------------------------: | :---------------: | :--------------------: |
| 1.0 | 0.8601 | 0.859 [**3**] | 0.9716(5) | 0.970 |
| 2.0 | 0.7642 | 0.768 [**3**] 0.764 [**4**] | 0.9932(9) | 0.992 |
| 3.0 | 0.6927 | 0.700 [**3**] | 1.0170(13) | 1.016 |
| 4.0 | 0.6367 | 0.646 [**3**] 0.645 [**4**] | 1.0390(10) | 1.039 |
| 5.0 | 0.5913 | 0.602 [**3**] | 1.0587(13) | 1.059 |
| 6.0 | 0.5535 | 0.568 [**3**] | 1.0759(12) | 1.078 |
[**References**]
1. [H.-J. Schulze, P. Schuck, and N. Van Giai, Two-dimensional electron gas in the random-phase approximation with exchange and self-energy corrections. *Phys. Rev. B 61, 8026* (2000).](https://link.aps.org/doi/10.1103/PhysRevB.61.8026)
2. [Simion, G. E. & Giuliani, G. F., Many-body local fields theory of quasiparticle properties in a three-dimensional electron liquid. *Phys. Rev. B 77, 035131* (2008).](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.77.035131)
3. G. D. Mahan, *Many-Particle Physics* (Plenum, New York, 1991), Chap. 5.
4. [B. Holm and U. von Barth, Fully self-consistent GW self-energy of the electron gas. *Phys. Rev. B 57, 2108* (1998).](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.57.2108)
5. Calculated at the temperature $T=T_F/1000$
| ElectronGas | https://github.com/numericalEFT/ElectronGas.jl.git |
| [ "MIT" ] | 0.1.0 | edb0ba263d535e371c40144058c4214836f05867 | code | 184 |
module DashBioUtils
using Glob, GZip
using HTTP, StringEncodings, FASTX
using StatsBase
include("ngl_parser.jl")
include("protein_reader.jl")
include("xyz_reader.jl")
end # module
| DashBioUtils | https://github.com/plotly/DashBioUtils.jl.git |
| [ "MIT" ] | 0.1.0 | edb0ba263d535e371c40144058c4214836f05867 | code | 3098 |
#=
This module contains functions that parse and structure data into a
dict for use with the NGL Molecule Viewer component.
One or multiple input data files in the PDB or .cif.gz format can be
entered to return a dict to input as the `data` param of the component.
=#
"""
single_split(string, sep)
Helper function to split `string` into two substrings based on `sep`.
# Example
```julia
a,b = DashBioUtils.single_split("hello.app",".")
```
"""
function single_split(string, sep)
parts = split(string, sep)
if length(parts) > 2
ct = count(c -> c == only(sep), collect(string))
throw("expected $(sep) once, found $(ct) in $(string)")
end
return parts
end
"""
get_highlights(string, sep, atom_indicator)
Helper function that splits `string` on `sep` and partitions the trailing comma-separated items into atoms and residues, using `atom_indicator` to identify atoms.
# Example
```julia
a,b = DashBioUtils.get_highlights("Hello.app", ".", "a")
```
"""
function get_highlights(string, sep, atom_indicator)
residues_list = []
atoms_list = []
str_, _str = single_split(string, sep)
for e in split(_str, ",")
if occursin(atom_indicator, e)
push!(atoms_list, e)
else
push!(residues_list, e)
end
end
return (str_, Dict("atoms" => join(atoms_list, ","), "residues" => join(residues_list, ",")))
end
"""
get_data(data_path, pdb_id, color; reset_view=false, loc=true)

Build the data `Dict` consumed by the NGL Molecule Viewer component, reading the structure from a local file (`loc=true`) or from the remote `data_path` URL (`loc=false`).
"""
function get_data(data_path, pdb_id, color; reset_view=false, loc=true)
chain = "ALL"
aa_range = "ALL"
highlight_dic = Dict("atoms" => "", "residues" => "")
# Check if only one chain should be shown
if occursin(".", pdb_id)
pdb_id, chain = single_split(pdb_id, ".")
highlights_sep = "@"
atom_indicator = "a"
# Check if only a specified amino acids range should be shown:
if occursin(":", chain)
chain, aa_range = single_split(chain, ":")
# Check if atoms should be highlighted
if occursin(highlights_sep,aa_range)
aa_range, highlight_dic = get_highlights(
aa_range, highlights_sep, atom_indicator
)
end
else
if occursin(highlights_sep, chain)
chain, highlight_dic = get_highlights(
chain, highlights_sep, atom_indicator
)
end
end
end
if loc
fname = [f for f in glob(string(data_path, pdb_id, ".*"))][1]
if occursin("gz", fname)
ext = split(fname, ".")[end-1]
f = GZip.gzopen(fname,"r")
rf = GZip.readline(f)
content = decode(rf, "UTF-8")
else
ext = split(fname, ".")[end]
f = GZip.gzopen(fname,"r")
content = GZip.readline(f)
end
else
fname = string(single_split(pdb_id, ".")[1], ".pdb")
ext = split(fname, ".")[end]
req = HTTP.request("GET", string(data_path, fname))
content= decode(req.body, "UTF-8")
end
return Dict(
"filename" => split(fname, "/")[end],
"ext" => ext,
"selectedValue" => pdb_id,
"chain" => chain,
"aaRange" => aa_range,
"chosen" => highlight_dic,
"color" => color,
"config" => Dict("type" => "text/plain", "input" => content),
"resetView" => reset_view,
"uploaded" => false,
)
end
| DashBioUtils | https://github.com/plotly/DashBioUtils.jl.git |
| [ "MIT" ] | 0.1.0 | edb0ba263d535e371c40144058c4214836f05867 | code | 4174 |
#=
This module includes functions that extract data from FASTA files into
dictionaries.
Attributes: _DATABASES (dict): A dictionary that translates the
database specified in the description line of the FASTA file to
its constituent metadata fields.
=#
# information on database header formats, taken from
# https://en.wikipedia.org/wiki/FASTA_format
_DATABASES = Dict(
"gb" => ["accession", "locus"],
"emb"=> ["accession", "locus"],
"dbj" => ["accession", "locus"],
"pir" => ["entry"],
"prf" => ["name"],
"sp" => ["accession", "entry name", "protein name", "organism name",
"organism identifier", "gene name", "protein existence",
"sequence version"],
"tr" => ["accession", "entry name", "protein name", "organism name",
"organism identifier", "gene name", "protein existence",
"sequence version"],
"pdb" => ["entry", "chain"],
"pat" => ["country", "number"],
"bbs" => ["number"],
"gnl" => ["database", "identifier"],
"ref" => ["accession", "locus"],
"lcl" => ["identifier"],
"nxp" => ["identifier", "gene name", "protein name", "isoform name"]
)
"""
read_fasta(datapath_or_datastring; is_datafile=true)
Read a file in FASTA format, either from a file or from a string of raw
data.
`datapath_or_datastring`: Either the path to the FASTA file (can be relative
or absolute), or a string corresponding to the content
of a FASTA file (including newline characters).
`is_datafile`: Either `true` (default) if passing the filepath to the data,
or `false` if passing a string of raw data.
`return`: A list of protein objects, each containing a
description (based on the header line) and the amino
acid sequence with, optionally, all non-amino-acid
letters removed.
"""
function read_fasta(datapath_or_datastring::String; is_datafile=true)
raw_data = []
# open file if given a path
if is_datafile
records = open(FASTA.Reader, datapath_or_datastring)
else
req = Base.download(datapath_or_datastring)
records = open(FASTA.Reader, req)
end
cl_records = collect(records)
len_records = length(cl_records)
fasta_data = Vector{Dict}(undef,len_records)
for (idx, val) in enumerate(cl_records)
dt = decode(val.data, "UTF-8")
fasta_data[idx] = Dict("description" => decode_description(string(dt[val.identifier], dt[val.description])),
"sequence" => string(dt[val.sequence]))
end
return fasta_data
end
"""
decode_description(description)
Parse the first line of a FASTA file using the specifications of
several different database headers (in _DATABASES).
`description`: The header line with the initial `>``
removed.
`return`: A dictionary for which each key-value pair comprises
a property specified by the database used and the
value of that property given by the header. If the
database is not recognized, the keys are given as
'desc-n' where n is the position of the property.
"""
function decode_description(description)
if description == ""
return Dict("-1" => "no description")
end
decoded = Dict()
desc = split(description,"|")
if haskey(_DATABASES, desc[1])
db_info = _DATABASES[desc[1]]
if desc[1] in ["sp", "tr"]
decoded["accession"] = desc[2]
# using regex to get the other information
rs = match(
r"([^\s]+)(.*)\ OS=(.*)\ OX=(.*)\ GN=(.*)\ PE=(.*)\ SV=(.*)$", desc[3]
)
for i in 3:length(db_info)
decoded[db_info[i]] = rs.captures[i-2]
end
else
# shift by one, since first section in header describes
# the database
for i in 1:length(desc)-1
decoded[db_info[i]] = desc[i+1]
end
end
else
if length(desc) > 1
for i in 1:length(desc)-1
decoded[string(i)] = desc[i+1]
end
else
ecoded["Header"] = desc[0]
end
end
return decoded
end
| DashBioUtils | https://github.com/plotly/DashBioUtils.jl.git |
| [ "MIT" ] | 0.1.0 | edb0ba263d535e371c40144058c4214836f05867 | code | 1618 |
#=
XYZ reader
This module contains functions that can read an XYZ file and return a
dictionary with its contents.
=#
"""
read_xyz(datapath_or_datastring, is_datafile=true)
Read data in .xyz format, from either a file or a raw string.
`datapath_or_datastring`: Either the path to the XYZ file (can be relative
or absolute), or a string corresponding to the content
of an XYZ file (including newline characters).
`is_datafile`: Either `true` (default) if passing the filepath to the data,
or `false` if passing a string of raw data.
`return`: A list of the atoms in the order that
they appear on the file, stored in
objects with keys "symbol", "x", "y",
and "z".
"""
function read_xyz(datapath_or_datastring; is_datafile=true)
if is_datafile
records = decode(read(datapath_or_datastring),"UTF-8")
else
req = Base.download(datapath_or_datastring)
records = decode(read(req),"UTF-8")
end
lines = split(records, "\n")
atoms =Vector{Dict}(undef,0)
for line in lines
rs = match(
r"^\s*([\w]+)\s+([\w\.\+\-]+)\s+([\w\.\+\-]+)\s+([\w\.\+\-]+)\s*", string(line)
)
if !(rs isa Nothing) && (length(rs.captures) == 4)
atom = Dict(
"symbol" => rs.captures[1],
"x" => parse(Float64,rs.captures[2]),
"y" => parse(Float64,rs.captures[3]),
"z" => parse(Float64,rs.captures[4])
)
push!(atoms, atom)
end
end
return atoms
end | DashBioUtils | https://github.com/plotly/DashBioUtils.jl.git |
| [ "MIT" ] | 0.1.0 | edb0ba263d535e371c40144058c4214836f05867 | docs | 170 |
# DashBioUtils.jl
Helper function package for **DashBio.jl**
### Installation
```julia
using Pkg
Pkg.add(url = "https://github.com/plotly/DashBioUtils.jl.git")
``` | DashBioUtils | https://github.com/plotly/DashBioUtils.jl.git |
| [ "MIT" ] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | code | 684 |
#PhysicalCommunications:
#-------------------------------------------------------------------------------
module PhysicalCommunications
include("functions.jl")
include("prbs.jl")
include("eyediag.jl")
#==Exported interface
===============================================================================#
export MaxLFSR #Create MaxLFSR iterator object.
export sequence #Builds sequences with LFSR objects
export sequence_detecterrors #Tests validity of bit sequence using sequence generator algorithm
export buildeye
#==Unexported interface
================================================================================
.DataEye #Stores eye data
==#
end # module
#Last line
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
| [ "MIT" ] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | code | 2684 |
#Eye diagram generation
#-------------------------------------------------------------------------------
#==Types
===============================================================================#
#=TODO: Restructure DataEye to be a vector of (x,y) vector pairs?
- Less likely to be in an invalid state.
- But less compatible with most plotting tool API (which tend to accept
vectors of x & y vectors).
- Possibly have an accessor function to "export" contents of DataEye
into an (x, y) tuple storing vectors of Float64[] for direct
compatibility with said plotting tools??
=#
mutable struct DataEye
vx::Vector{Vector{Float64}}
vy::Vector{Vector{Float64}}
end
DataEye() = DataEye([], [])
#==
===============================================================================#
#=TODO: - kwargs tbit, teye?:
Preferable to use named arguments, but can it be done cleanly without
being confusing to user?
Advanced features:
- Define algorithm to detect first crossing and center eye
(select `tstart`) accordingly.
=#
"""
buildeye(x::Vector, y::Vector, tbit::Number, teye::Number; tstart::Number=0)
Builds an eye diagram by folding `x` values of provided `(x,y)` into multiple windows of `teye` that start (are "triggered") every `tbit`.
Trace data is stored in a `DataEye` object as vectors of `(x,y)` sub-vectors.
Inputs:
- tbit: Bit period (s). Defines "trigger point" of each bit start.
- teye: Eye period (s). Defines how much of eye data (along x-axis) algorithm is to collect after "trigger point". Typically, this is set to `2.0*tbit`.
- tstart: Time of first "trigger point" (s).
Example plotting with Plots.jl:
#Assumption: (x, y) data generated here.
tbit = 1e-9 #Assume data bit period is 1ns.
#Build eye & use tstart to center data.
eye = buildeye(x, y, tbit, 2.0*tbit, tstart=0.2*tbit)
plot(eye.vx, eye.vy)
"""
function buildeye(x::Vector, y::Vector, tbit::Number, teye::Number; tstart::Number=0)
eye = DataEye()
i = 1
#skip initial data:
while i <= length(x) && x[i] < tstart
i+=1
end
wndnum = 0
inext = i
while true
if inext > length(x); break; end #Nothing else to add
wndstart = tstart+wndnum*tbit
nexteye = wndstart+tbit
wndend = wndstart+teye
istart = inext
i = istart
while i <= length(x) && x[i] < nexteye
i+=1
end
inext = i
while i <= length(x) && x[i] <= wndend
i+=1
end
if i > length(x)
i = length(x)
end
if x[i] > wndend
i -= 1
end
if i > istart
push!(eye.vx, x[istart:i].-wndstart)
push!(eye.vy, y[istart:i])
end
wndnum += 1
end
return eye
end
#Last line
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
| [ "MIT" ] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | code | 825 |
#PhysicalCommunications function tools
#-------------------------------------------------------------------------------
#==Basic tools
===============================================================================#
#Get the value of a particular keyword in the list of keyword arguments:
getkwarg(kwargs::Base.Iterators.Pairs, s::Symbol) = get(kwargs, s, nothing)
#==Ensure interface (similar to assert)
===============================================================================#
#=Similar to assert. However, unlike assert, "ensure" is not meant for
debugging. Thus, ensure is never meant to be compiled out.
=#
function ensure(cond::Bool, err)
if !cond; throw(err); end
end
#Conditionally generate error using "do" syntax:
function ensure(fn::Function, cond::Bool)
if !cond; throw(fn()); end
end
#Last line
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
| [ "MIT" ] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | code | 6472 |
#PhysicalCommunications: Pseudo-Random Bit Sequence Generators/Checkers
#-------------------------------------------------------------------------------
#==Constants
===============================================================================#
#Integer representation of polynomial x^p1 + x^p2 + x^p3 + x^p4 + 1
_poly(p1::Int) = one(UInt64)<<(p1-1)
_poly(p1::Int, p2::Int) = _poly(p1) + _poly(p2)
_poly(p1::Int, p2::Int, p3::Int, p4::Int) = _poly(p1) + _poly(p2) + _poly(p3) + _poly(p4)
#==Maximum-length Linear-Feedback Shift Register (LFSR) polynomials/taps (XNOR form)
Ref: Alfke, Efficient Shift Registers, LFSR Counters, and Long Pseudo-Random
Sequence Generators, Xilinx, XAPP 052, v1.1, 1996.==#
const MAXLFSR_POLYNOMIAL = [
_poly(64,64) #1: not supported
_poly(64,64) #2: not supported
_poly(3,2) #3
_poly(4,3) #4
_poly(5,3) #5
_poly(6,5) #6
_poly(7,6) #7
_poly(8,6,5,4) #8
_poly(9,5) #9
_poly(10,7) #10
_poly(11,9) #11
_poly(12,6,4,1) #12
_poly(13,4,3,1) #13
_poly(14,5,3,1) #14
_poly(15,14) #15
_poly(16,15,13,4) #16
_poly(17,14) #17
_poly(18,11) #18
_poly(19,6,2,1) #19
_poly(20,17) #20
_poly(21,19) #21
_poly(22,21) #22
_poly(23,18) #23
_poly(24,23,22,17) #24
_poly(25,22) #25
_poly(26,6,2,1) #26
_poly(27,5,2,1) #27
_poly(28,25) #28
_poly(29,27) #29
_poly(30,6,4,1) #30
_poly(31,28) #31
_poly(32,22,2,1) #32
]
#==Types
===============================================================================#
abstract type SequenceGenerator end #Defines algorithm used by sequence() to create a bit sequence
abstract type PRBSGenerator <: SequenceGenerator end #Specifically a pseudo-random bit sequence
#Define supported algorithms:
struct MaxLFSR{LEN} <: PRBSGenerator; end #Identifies a "Maximum-Length LFSR" algorithm
#Define iterator & state objects:
struct MaxLFSR_Iter{LEN,TRESULT} #LFSR "iterator" object
seed::UInt64 #Initial state (easier to define here than creating state in parallel)
mask::UInt64 #Store mask value since it cannot easily be statically evaluated.
len::Int
end
mutable struct MaxLFSR_State{LEN}
reg::UInt64 #Current state of LFSR register
bitsrmg::Int #How many bits left to generate
end
#==Constructors
===============================================================================#
"""
MaxLFSR(reglen::Int)
Construct an object used to identify the Maximum-length LFSR algorithm of a given shift register length, `reglen`.
"""
MaxLFSR(reglen::Int) = MaxLFSR{reglen}()
#==Helper functions:
===============================================================================#
#Find next bit & update state:
function _nextbit(state::MaxLFSR_State{LEN}, polymask::UInt64) where LEN
msb = UInt64(1)<<(LEN-1) #Statically compiles if LEN is known
#Mask out all "non-tap" bits:
reg = state.reg | polymask
bit = msb
for j in 1:LEN
bit = ~xor(reg, bit)
reg <<= 1
end
bit = UInt64((bit & msb) > 0) #Convert resultant MSB to an integer
state.reg = (state.reg << 1) | bit #Leaves garbage @ bits above LEN
state.bitsrmg -= 1
return bit
end
#Core algorithm for sequence() function (no kwargs):
function _sequence(::MaxLFSR{LEN}, seed::UInt64, len::Int, output::DataType) where LEN
ensure(in(LEN, 3:32), ArgumentError("Invalid LFSR register length, $LEN: 3 <= length <= 32"))
ensure(LEN < 64,
OverflowError("Cannot build sequence for MaxLFSR{LEN} with LEN=$LEN >= 64."))
availbits = (UInt64(1)<<LEN)-UInt64(1) #Available LFSR bits
ensure((seed & availbits) == seed,
OverflowError("seed=$seed does not fit in LFSR with register length of $LEN."))
ensure(len>=0, ArgumentError("Invalid sequence length. len must be non-negative"))
poly = UInt64(MAXLFSR_POLYNOMIAL[LEN])
mask = ~poly
#==Since `1 XNOR A => A`, we can ignore all taps that are not part of the
polynomial, simply by forcing all non-tap bits to 1 (OR-ing with `~poly`)
Thus, we build a mask from `~poly`.
==#
return MaxLFSR_Iter{LEN, output}(seed, mask, len)
end
#==Iterator interface:
===============================================================================#
Base.length(iter::MaxLFSR_Iter) = iter.len
Base.eltype(iter::MaxLFSR_Iter{LEN, TRESULT}) where {LEN, TRESULT} = TRESULT
Iterators.IteratorSize(iter::MaxLFSR_Iter) = Base.HasLength()
function Iterators.iterate(iter::MaxLFSR_Iter{LEN, TRESULT}, state::MaxLFSR_State{LEN}) where {LEN, TRESULT}
if state.bitsrmg < 1
return nothing
end
bit = _nextbit(state, iter.mask)
return (TRESULT(bit), state)
end
function Iterators.iterate(iter::MaxLFSR_Iter{LEN}) where LEN
state = MaxLFSR_State{LEN}(iter.seed, iter.len)
return iterate(iter, state)
end
#==High-level interface
===============================================================================#
"""
sequence(t::SequenceGenerator; seed::Integer=11, len::Int=-1, output::DataType=Int)
Create an iterable object that defines a bit sequence of length `len`.
Inputs:
- t: Instance defining type of algorithm used to generate bit sequence.
- seed: Initial value of register used to build sequence.
- len: Number of bits in sequence.
- output: DataType used for sequence elements (typical values are `Int` or `Bool`).
Example returning the first `1000` bits of a PRBS-`31` pattern constructed with the Maximum-length LFSR algorithm, seeded with an initial register value of `11`:
pattern = collect(sequence(MaxLFSR(31), seed=11, len=1000, output=Bool))
"""
sequence(t::MaxLFSR; seed::Integer=11, len::Int=-1, output::DataType=Int) =
_sequence(t, UInt64(seed), len, output)
"""
sequence_detecterrors(t::SequenceGenerator, v::Array)
Tests validity of bit sequence using sequence generator algorithm.
NOTE: Seeded from first bits of sequence in v.
"""
function sequence_detecterrors(t::MaxLFSR{LEN}, v::Vector{T}) where {LEN, T<:Number}
	ensure(length(v) > LEN, ArgumentError("Pattern vector too short to test validity (length must be > $LEN)"))
	if T != Bool #Bool values are inherently in {0,1}; other element types must be validated
		for i in 1:length(v)
			ensure(v[i]>=0 && v[i]<=1, ArgumentError("Sequence value ∉ [0,1] @ index $i."))
		end
	end
#Build seed register for Max LFSR algorithm:
_errors = similar(v)
seed = UInt64(0)
for i in 1:LEN
seed = (seed << 1) | UInt64(v[i])
_errors[i] = 0
end
#Test for errors in remaining sequence:
iter = _sequence(t, seed, length(v)-LEN, T)
state = MaxLFSR_State{LEN}(iter.seed, iter.len)
for i in (LEN+1):length(_errors)
(b, state) = iterate(iter, state)
_errors[i] = convert(T, b!=v[i])
end
return _errors
end
#Last line
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
|
[
"MIT"
] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | code | 2000 | #Eye Diagram Tests
@testset "Eye Diagram Tests" begin #Scope for test data
testbits = [0,1,0,1,0,1,1,0,1,0,0,1,1,1,0,0,0,1,0]
#@show length(testbits)
#Simple function to build a triangular bit pattern from a bit sequence
function BuildTriangPat(bitseq; tbit::Float64=1.0)
y = 1.0 .* bitseq #Get floating point values
x = collect((0:1:(length(bitseq)-1)) .* tbit)
return (x, y)
end
#For debug purposes:
function dbg_showeyedata(eyedata)
for i in 1:length(eyedata.vx)
println("$i:")
@show eyedata.vx[i]
@show eyedata.vy[i]
end
end
tbit = 1.0
(x,y) = BuildTriangPat(testbits, tbit = tbit)
@testset "Eye Diagram: Centered" begin
eyedata = buildeye(x, y, tbit, 2.0*tbit, tstart=0.0*tbit)
@test length(eyedata.vx) == length(eyedata.vy)
@test length(eyedata.vx) == length(testbits) - 1
# dbg_showeyedata(eyedata)
for i in 1:(length(eyedata.vx)-1)
@test eyedata.vx[i] == [0.0, 1.0, 2.0]
@test eyedata.vy[i] == Float64[testbits[i], testbits[i+1], testbits[i+2]]
end
i = length(eyedata.vx)
@test eyedata.vx[end] == [0.0, 1.0] #Cannot extrapolate past last data point
@test eyedata.vy[i] == Float64[testbits[i], testbits[i+1]]
end
@testset "Eye Diagram: Early" begin
eyedata = buildeye(x, y, tbit, 2.0*tbit, tstart=0.8*tbit)
@test length(eyedata.vx) == length(eyedata.vy)
@test length(eyedata.vx) == length(testbits) - 2 #Skipped 1st data point
# dbg_showeyedata(eyedata)
for i in 1:length(eyedata.vx)
@test eyedata.vx[i] ≈ [0.2, 1.2]
@test eyedata.vy[i] == Float64[testbits[i+1], testbits[i+2]]
end
end
@testset "Eye Diagram: Late" begin
eyedata = buildeye(x, y, tbit, 2.0*tbit, tstart=0.2*tbit)
@test length(eyedata.vx) == length(eyedata.vy)
@test length(eyedata.vx) == length(testbits) - 2 #Skipped 1st data point
# dbg_showeyedata(eyedata)
for i in 1:length(eyedata.vx)
#Last data point in sweep should be >= 2.0*tbit:
@test eyedata.vx[i] ≈ [0.8, 1.8]
@test eyedata.vy[i] == Float64[testbits[i+1], testbits[i+2]]
end
end
end #"Eye Diagram Tests"
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
|
[
"MIT"
] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | code | 1667 | #Pseudo-Random Bit Sequence Tests
@testset "PRBS Tests" begin #Scope for test data
_DEBUG = false
#Dumps contents of a bit sequence vector for debugging purposes:
function dbg_dumpseq(prefix, seq)
if !_DEBUG; return; end
print(prefix)
for bit in seq
if bit > 0
print("1")
else
print("0")
end
end
println()
end
@testset "Cyclical tests: MaxLFSR" begin
for prbslen in 3:15 #Don't perform tests on higher-order patterns (too much time/memory)
patlen = (2^prbslen) - 1
genlen = patlen + prbslen #Full pattern length + "internal register length"
regmask = (Int64(1)<<prbslen) -1 #Mask out "non-register" bits
seed = rand(Int64) & regmask
pattern = collect(sequence(MaxLFSR(prbslen), seed=seed, len=genlen, output=Int))
#Make sure pattern repeats for at least as many bits as are in internal register:
mismatch = pattern[(1:prbslen)] .!= pattern[patlen .+ (1:prbslen)]
@test sum(convert(Vector{Int}, mismatch)) == 0
#@show prbslen, pattern[1:prbslen], pattern[patlen .+ (1:prbslen)]
end
end
@testset "Error detection: MaxLFSR" begin
#TODO: Compare algorithm against known good pattern (if one can be found).
prbslen = 15; seed = 11; len = 50
pattern = collect(sequence(MaxLFSR(prbslen), seed=seed, len=len, output=Int))
_errors = sequence_detecterrors(MaxLFSR(prbslen), pattern)
dbg_dumpseq("_errors = ", _errors) #DEBUG
@test sum(_errors) == 0
#Test with error injected:
pattern[prbslen+5] = 1 - pattern[prbslen+5] #Flip "bit"
_errors = sequence_detecterrors(MaxLFSR(prbslen), pattern)
dbg_dumpseq("_errors = ", _errors) #DEBUG
@test sum(_errors) == 1 #Should have a single error
end
end #PRBS Tests
#Last line
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
|
[
"MIT"
] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | code | 125 | using Test, PhysicalCommunications
testfiles = [ "prbs.jl", "eyediag.jl"]
for testfile in testfiles
include(testfile)
end
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
|
[
"MIT"
] | 0.1.1 | e15ba568ac49d1ebc98578ed3bdd093a25ee7f2d | docs | 2642 | # PhysicalCommunications.jl
[](https://travis-ci.org/ma-laforge/PhysicalCommunications.jl)
## Description
PhysicalCommunications.jl provides tools for the development & test of the physical communication layer (typically implemented in the "PHY" chip).
### Eye Diagrams
| <img src="https://github.com/ma-laforge/FileRepo/blob/master/SignalProcessing/sampleplots/demo7.png" width="850"> |
| :---: |
- **`buildeye()`**: Builds an eye diagram by folding the provided `(x,y)` waveform into multiple windows of width `teye` that start (are "triggered") every `tbit`:
- `buildeye(x::Vector, y::Vector, tbit::Number, teye::Number; tstart::Number=0)`
Example plotting with Plots.jl:
```
#Assumption: (x, y) data generated here.
tbit = 1e-9 #Assume data bit period is 1ns.
#Build eye & use tstart to center data.
eye = buildeye(x, y, tbit, 2.0*tbit, tstart=0.2*tbit)
plot(eye.vx, eye.vy)
```
### Test Patterns
The PhysicalCommunications.jl module provides the means to create pseudo-random bit sequences to test/validate channel performance:
Example creation of PRBS pattern using maximum-length Linear-Feedback Shift Register (LFSR):
```
pattern = collect(sequence(MaxLFSR(31), seed=11, len=1000, output=Bool))
```
Example validation of maximum-length LFSR sequence:
```
_errors = sequence_detecterrors(MaxLFSR(31), pattern)
```
#### Test Patterns: Supported Sequence Generators (Types)
- **`SequenceGenerator`** (abstract type): Defines algorithm used by sequence() to create a bit sequence.
- **`PRBSGenerator <: SequenceGenerator`** (abstract type): Specifically a pseudo-random bit sequence.
- **`MaxLFSR{LEN} <: PRBSGenerator`**: Identifies a "Maximum-Length LFSR" algorithm.
- Reference: Alfke, Efficient Shift Registers, LFSR Counters, and Long Pseudo-Random Sequence Generators, Xilinx, XAPP 052, v1.1, 1996.
- **`MaxLFSR_Iter{LEN,TRESULT}`**: "Iterator" object for the MaxLFSR sequence generator.
  - Must call `collect(::MaxLFSR_Iter)` to obtain the sequence values.
#### Test Patterns: Iterable API
- **`sequence()`**: Create an iterable object that defines a bit sequence of length `len`.
- `sequence(t::SequenceGenerator; seed::Integer=11, len::Int=-1, output::DataType=Int)`
- Must use `collect(sequence([...]))` to obtain actual sequence values.
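Example of the full workflow (a sketch; the exact bit values depend on the seed):
```
seq = sequence(MaxLFSR(7), seed=5, len=16, output=Int) #lazy iterator, nothing generated yet
bits = collect(seq) #materialize the 16-bit sequence as a Vector{Int}
```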
## Compatibility
Extensive compatibility testing of PhysicalCommunications.jl has not been performed. The module has been tested using the following environment(s):
- Linux / Julia-1.1.1
## Disclaimer
The PhysicalCommunications.jl module is not yet mature. Expect significant changes.
| PhysicalCommunications | https://github.com/JuliaTelecom/PhysicalCommunications.jl.git |
|
[
"Apache-2.0"
] | 0.1.1 | c84370337e22f75dce100bae1aa9be53b44936c3 | code | 32686 | module DeltaArrays
using LinearAlgebra: LinearAlgebra, sym_uplo, AdjointAbsVec, TransposeAbsVec, AbstractTriangular, AbstractVecOrMat, HermOrSym, QRCompactWYQ, QRPackedQ, Diagonal, Symmetric, Hermitian, Tridiagonal, AdjOrTransAbsMat, Adjoint, Transpose, SymTridiagonal, UpperTriangular, LowerTriangular, UnitUpperTriangular, UnitLowerTriangular, SingularException, MulAddMul, Eigen, GeneralizedEigen, SVD, I, symmetric, eigen!, eigencopy_oftype
import Core: Array
import Base: similar, copyto!, size, getindex, setindex!, parent, real, imag, iszero, isone, conj, conj!, adjoint, transpose, permutedims, inv, sum, kron, kron!, require_one_based_indexing, @propagate_inbounds, @invoke
import Base: -, +, ==, *, /, \, ^
import LinearAlgebra: ishermitian, issymmetric, isposdef, factorize, isdiag, diag, tr, det, logdet, logabsdet, pinv, eigvals, eigvecs, eigen, svdvals, svd, istriu, istril, triu!, tril!, lmul!, rmul!, ldiv!, rdiv!, mul!, dot
export DeltaArray, delta, deltaind
function deltaind(A::AbstractArray)
Base.require_one_based_indexing(A)
deltaind(size(A)[1:end-1]...)
end
deltaind(n::Integer...) = range(1, step=sum(cumprod(n), init=1), length=minimum(n))
delta(i::Integer...) = allequal(i)
delta(A::AbstractArray) = A[deltaind(A)]
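# Illustrative example: for A = reshape(1:8, 2, 2, 2),
#   deltaind(A) == 1:7:8  # linear indices of the "hyper-diagonal" elements
#   delta(A) == [1, 8]    # the entries A[1,1,1] and A[2,2,2]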
struct DeltaArray{T,N,V<:AbstractVector{T}} <: AbstractArray{T,N}
data::V
function DeltaArray{T,N,V}(values) where {T,N,V<:AbstractVector{T}}
Base.require_one_based_indexing(values)
new{T,N,V}(values)
end
end
DeltaArray{T,N,V}(D::DeltaArray) where {T,N,V<:AbstractVector{T}} = DeltaArray{T,N,V}(D.data)
delta(D::DeltaArray) = D.data
function Base.promote_rule(A::Type{<:DeltaArray{<:Any,N,V}}, B::Type{<:DeltaArray{<:Any,N,W}}) where {N,V,W}
X = promote_type(V, W)
T = eltype(X)
isconcretetype(T) && return DeltaArray{T,N,X}
return typejoin(A, B)
end
"""
DeltaArray(V::AbstractVector)
Construct a matrix with `V` as its diagonal.
See also [`delta`](@ref).
# Examples
```jldoctest
julia> DeltaArray([1, 10, 100])
3×3 DeltaArray{$Int, 2, Vector{$Int}}:
1 0 0
0 10 0
0 0 100
```
"""
DeltaArray(V::AbstractVector)
# `N`=2 by default, equivalent to diagonal
DeltaArray(v::AbstractVector{T}) where {T} = DeltaArray{T,2,typeof(v)}(v)
DeltaArray(d::Diagonal) = DeltaArray(diag(d))
DeltaArray{N}(v::AbstractVector{T}) where {T,N} = DeltaArray{T,N,typeof(v)}(v)
# TODO maybe add `DeltaArray{N}(d::Diagonal)?`
"""
DeltaArray(M::AbstractMatrix)
Constructs a matrix from the diagonal of `M`.
# Note
The resulting `DeltaArray` should behave in a similar way to a `Diagonal` object.
# Examples
```jldoctest
julia> A = permutedims(reshape(1:15, 5, 3))
3×5 Matrix{$Int}:
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
julia> DeltaArray(A)
3×3 DeltaArray{$Int, 2, Vector{$Int}}:
1 0 0
0 7 0
0 0 13
julia> delta(A)
3-element Vector{$Int}:
1
7
13
"""
DeltaArray(M::AbstractMatrix) = DeltaArray(diag(M))
"""
DeltaArray(A::AbstractArray)
Constructs an array from the diagonal of `A`.
# Examples
```jldoctest
julia> A = reshape(1:16, 2, 2, 2, 2)
2×2×2×2 reshape(::UnitRange{$Int}, 2, 2, 2, 2) with eltype $Int:
[:, :, 1, 1] =
1 3
2 4
[:, :, 2, 1] =
5 7
6 8
[:, :, 1, 2] =
9 11
10 12
[:, :, 2, 2] =
13 15
14 16
julia> DeltaArray(A)
2×2 DeltaArray{$Int, 2, Vector{$Int}}:
1 0
0 16
julia> delta(A)
2-element Vector{$Int}:
1
16
"""
DeltaArray(A::AbstractArray{<:Any,N}) where {N} = DeltaArray{N}(delta(A))
DeltaArray(D::DeltaArray) = D
DeltaArray{T}(D::DeltaArray{<:Any,N}) where {T,N} = DeltaArray{T,N}(D)
DeltaArray{T,N}(D::DeltaArray) where {T,N} = DeltaArray{N}(convert(AbstractVector{T}, D.data))
AbstractArray{T}(D::DeltaArray) where {T} = DeltaArray{T}(D)
AbstractArray{T,N}(D::DeltaArray) where {T,N} = DeltaArray{T,N}(D)
Array(D::DeltaArray{T,N}) where {T,N} = Array{promote_type(T, typeof(zero(T))),N}(D)
function Array{T,N}(D::DeltaArray) where {T,N}
n = size(D, 1)
B = zeros(T, ntuple(_ -> n, N))
@inbounds for i in 1:n
# TODO revise if `ntuple` is performance optimal
# alternative could be to use `B[delta(B)] .= D.data`
B[ntuple(_ -> i, N)...] = D.data[i]
end
return B
end
"""
DeltaArray{T,N}(undef,n)
Construct an uninitialized `DeltaArray{T,N}` of order `N` and length `n`.
"""
DeltaArray{T,N}(::UndefInitializer, n::Integer) where {T,N} = DeltaArray{N}(Vector{T}(undef, n))
similar(D::DeltaArray{<:Any,N}, ::Type{T}) where {T,N} = DeltaArray{N}(similar(D.data, T))
similar(::DeltaArray, ::Type{T}, dims::Tuple{Vararg{Int,N}}) where {T, N} = zeros(T, dims...)
copyto!(D1::DeltaArray, D2::DeltaArray) = (copyto!(D1.data, D2.data); D1)
__nvalues(D::DeltaArray) = length(D.data)
size(D::DeltaArray{<:Any,N}) where {N} = ntuple(_ -> __nvalues(D), N)
# TODO put type to i... to be `Integer`?
@inline function getindex(D::DeltaArray, i::Int...)
@boundscheck checkbounds(D, i...)
if allequal(i)
@inbounds r = D.data[first(i)]
else
r = deltazero(D, i...)
end
r
end
deltazero(::DeltaArray{T}, i...) where {T} = zero(T)
deltazero(D::DeltaArray{<:AbstractArray{T,N}}, i...) where {T,N} = zeros(T, (size(D.data[j], n) for (j, n) in zip(i, 1:N))...)
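# Note: when the elements of a `DeltaArray` are themselves arrays, the off-diagonal
# "zero" returned for index (i, j, ...) is a zero block of size
# (size(D.data[i], 1), size(D.data[j], 2), ...), matching the diagonal blocks.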
# TODO put type to i... to be `Integer`?
function setindex!(D::DeltaArray, v, i::Int...)
@boundscheck checkbounds(D, i...)
if allequal(i)
@inbounds D.data[first(i)] = v
elseif !iszero(v)
throw(ArgumentError("cannot set off-diagonal entry ($(i...)) to a nonzero value ($v)"))
end
return v
end
# NOTE not working/used currently
## structured matrix methods ##
# function Base.replace_in_print_matrix(D::DeltaArray{<:Any,2}, i::Integer, j::Integer, s::AbstractString)
# allequal(i) ? s : Base.replace_with_centered_mark(s)
# end
parent(D::DeltaArray) = D.data
# NOTE `DeltaArrays` are always symmetric because they are invariant under permutations of its dims
ishermitian(D::DeltaArray{<:Real}) = true
ishermitian(D::DeltaArray{<:Number}) = isreal(D.data)
ishermitian(D::DeltaArray) = all(ishermitian, D.data)
issymmetric(D::DeltaArray{<:Number}) = true
issymmetric(D::DeltaArray) = all(issymmetric, D.data)
isposdef(D::DeltaArray) = all(isposdef, D.data)
factorize(D::DeltaArray) = D
real(D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(real(D.data))
imag(D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(imag(D.data))
iszero(D::DeltaArray) = all(iszero, D.data)
isone(D::DeltaArray) = all(isone, D.data)
isdiag(D::DeltaArray) = all(isdiag, D.data)
isdiag(D::DeltaArray{<:Number}) = true
istriu(D::DeltaArray{<:Any,2}, k::Integer=0) = k <= 0 || iszero(D.data)
istril(D::DeltaArray{<:Any,2}, k::Integer=0) = k >= 0 || iszero(D.data)
function triu!(D::DeltaArray{T,2}, k::Integer=0) where {T}
n = size(D, 1)
if !(-n + 1 <= k <= n + 1)
throw(ArgumentError("the requested diagonal, $k, must be at least $(-n + 1) and at most $(n + 1) in an $n-by-$n matrix"))
elseif k > 0
fill!(D.data, zero(T))
end
return D
end
function tril!(D::DeltaArray{T,2}, k::Integer=0) where {T}
n = size(D, 1)
if !(-n + 1 <= k <= n + 1)
throw(ArgumentError("the requested diagonal, $k, must be at least $(-n + 1) and at most $(n + 1) in an $n-by-$n matrix"))
elseif k < 0
fill!(D.data, zero(T))
end
return D
end
# NOTE the following method is not well defined and is susceptible for change
function (==)(Da::DeltaArray, Db::DeltaArray)
@boundscheck ndims(Da) != ndims(Db) && throw(DimensionMismatch("a has dims $(ndims(Da)) and b has $(ndims(Db))"))
return Da.data == Db.data
end
(-)(D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(-D.data)
# NOTE the following method is not well defined and is susceptible for change
function (+)(Da::DeltaArray, Db::DeltaArray)
@boundscheck ndims(Da) != ndims(Db) && throw(DimensionMismatch("a has dims $(ndims(Da)) and b has $(ndims(Db))"))
return DeltaArray{ndims(Da)}(Da.data + Db.data)
end
# NOTE the following method is not well defined and is susceptible for change
function (-)(Da::DeltaArray, Db::DeltaArray)
@boundscheck ndims(Da) != ndims(Db) && throw(DimensionMismatch("a has dims $(ndims(Da)) and b has $(ndims(Db))"))
return DeltaArray{ndims(Da)}(Da.data - Db.data)
end
for f in (:+, :-)
@eval function $f(D::DeltaArray{<:Any,2}, S::Symmetric)
return Symmetric($f(D, S.data), sym_uplo(S.uplo))
end
@eval function $f(S::Symmetric, D::DeltaArray{<:Any,2})
return Symmetric($f(S.data, D), sym_uplo(S.uplo))
end
@eval function $f(D::DeltaArray{<:Real,2}, H::Hermitian)
return Hermitian($f(D, H.data), sym_uplo(H.uplo))
end
@eval function $f(H::Hermitian, D::DeltaArray{<:Real,2})
return Hermitian($f(H.data, D), sym_uplo(H.uplo))
end
end
(*)(x::Number, D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(x * D.data)
(*)(D::DeltaArray{<:Any,N}, x::Number) where {N} = DeltaArray{N}(D.data * x)
(/)(D::DeltaArray{<:Any,N}, x::Number) where {N} = DeltaArray{N}(D.data / x)
(\)(x::Number, D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(x \ D.data)
(^)(D::DeltaArray{<:Any,N}, a::Number) where {N} = DeltaArray{N}(D.data .^ a)
(^)(D::DeltaArray{<:Any,N}, a::Real) where {N} = DeltaArray{N}(D.data .^ a) # for disambiguation
(^)(D::DeltaArray{<:Any,N}, a::Integer) where {N} = DeltaArray{N}(D.data .^ a) # for disambiguation
Base.literal_pow(::typeof(^), D::DeltaArray{<:Any,N}, valp::Val) where {N} = DeltaArray{N}(Base.literal_pow.(^, D.data, valp)) # for speed
Base.literal_pow(::typeof(^), D::DeltaArray, valp::Val{-1}) = inv(D) # for disambiguation
function checkmulsize(A, B)
nA = size(A, 2)
mB = size(B, 1)
nA == mB || throw(DimensionMismatch("second dimension of A, $nA, does not match first dimension of B, $mB"))
return nothing
end
checksizeout(C, ::DeltaArray{<:Any,2}, A) = checksizeout(C, A)
checksizeout(C, A, ::DeltaArray{<:Any,2}) = checksizeout(C, A)
checksizeout(C, A::DeltaArray{<:Any,2}, ::DeltaArray{<:Any,2}) = checksizeout(C, A)
function checksizeout(C, A)
szA = size(A)
szC = size(C)
szA == szC || throw(DimensionMismatch("output matrix has size: $szC, but should have size $szA"))
return nothing
end
checkmulsize(C, A, B) = (checkmulsize(A, B); checksizeout(C, A, B))
function (*)(Da::DeltaArray{<:Any,2}, Db::DeltaArray{<:Any,2})
checkmulsize(Da, Db)
return DeltaArray{2}(Da.data .* Db.data)
end
function (*)(Da::DeltaArray{<:Any,2}, V::AbstractVector)
checkmulsize(Da, V)
    return Da.data .* V
end
(*)(A::AbstractMatrix, D::DeltaArray) = mul!(similar(A, Base.promote_op(*, eltype(A), eltype(D)), size(A)), A, D)
(*)(D::DeltaArray, A::AbstractMatrix) = mul!(similar(A, Base.promote_op(*, eltype(A), eltype(D)), size(A)), D, A)
rmul!(A::AbstractMatrix, D::DeltaArray{<:Any,2}) = @inline mul!(A, A, D)
lmul!(D::DeltaArray{<:Any,2}, B::AbstractVecOrMat) = @inline mul!(B, D, B)
function *(A::AdjOrTransAbsMat, D::DeltaArray{<:Any,2})
Ac = LinearAlgebra.copy_similar(A, Base.promote_op(*, eltype(A), eltype(D.data)))
rmul!(Ac, D)
end
*(D::DeltaArray{<:Any,2}, adjQ::Adjoint{<:Any,<:Union{QRCompactWYQ,QRPackedQ}}) =
rmul!(Array{promote_type(eltype(D), eltype(adjQ))}(D), adjQ)
function *(D::DeltaArray{<:Any,2}, A::AdjOrTransAbsMat)
Ac = LinearAlgebra.copy_similar(A, Base.promote_op(*, eltype(A), eltype(D.data)))
lmul!(D, Ac)
end
@inline function __muldiag!(out, D::DeltaArray{<:Any,2}, B, alpha, beta)
Base.require_one_based_indexing(B)
Base.require_one_based_indexing(out)
if iszero(alpha)
LinearAlgebra._rmul_or_fill!(out, beta)
else
if iszero(beta)
@inbounds for j in axes(B, 2)
@simd for i in axes(B, 1)
out[i, j] = D.data[i] * B[i, j] * alpha
end
end
else
@inbounds for j in axes(B, 2)
@simd for i in axes(B, 1)
out[i, j] = D.data[i] * B[i, j] * alpha + out[i, j] * beta
end
end
end
end
return out
end
@inline function __muldiag!(out, A, D::DeltaArray{<:Any,2}, alpha, beta)
Base.require_one_based_indexing(A)
Base.require_one_based_indexing(out)
if iszero(alpha)
LinearAlgebra._rmul_or_fill!(out, beta)
else
if iszero(beta)
@inbounds for j in axes(A, 2)
dja = D.data[j] * alpha
@simd for i in axes(A, 1)
out[i, j] = A[i, j] * dja
end
end
else
@inbounds for j in axes(A, 2)
dja = D.data[j] * alpha
@simd for i in axes(A, 1)
out[i, j] = A[i, j] * dja + out[i, j] * beta
end
end
end
end
return out
end
@inline function __muldiag!(out::DeltaArray{<:Any,2}, D1::DeltaArray{<:Any,2}, D2::DeltaArray{<:Any,2}, alpha, beta)
d1 = D1.data
d2 = D2.data
if iszero(alpha)
LinearAlgebra._rmul_or_fill!(out.data, beta)
else
if iszero(beta)
@inbounds @simd for i in eachindex(out.data)
out.data[i] = d1[i] * d2[i] * alpha
end
else
@inbounds @simd for i in eachindex(out.data)
out.data[i] = d1[i] * d2[i] * alpha + out.data[i] * beta
end
end
end
return out
end
@inline function __muldiag!(out, D1::DeltaArray{<:Any,2}, D2::DeltaArray{<:Any,2}, alpha, beta)
Base.require_one_based_indexing(out)
mA = size(D1, 1)
d1 = D1.data
d2 = D2.data
LinearAlgebra._rmul_or_fill!(out, beta)
if !iszero(alpha)
@inbounds @simd for i in 1:mA
out[i, i] += d1[i] * d2[i] * alpha
end
end
return out
end
@inline function _muldiag!(out, A, B, alpha, beta)
checksizeout(out, A, B)
__muldiag!(out, A, B, alpha, beta)
return out
end
function (*)(Da::DeltaArray{<:Any,2}, A::AbstractMatrix, Db::DeltaArray{<:Any,2})
return broadcast(*, Da.data, A, permutedims(Db.data))
end
# Get ambiguous method if try to unify AbstractVector/AbstractMatrix here using AbstractVecOrMat
@inline mul!(out::AbstractVector, D::DeltaArray{<:Any,2}, V::AbstractVector, alpha::Number, beta::Number) = _muldiag!(out, D, V, alpha, beta)
@inline mul!(out::AbstractMatrix, D::DeltaArray{<:Any,2}, B::AbstractMatrix, alpha::Number, beta::Number) = _muldiag!(out, D, B, alpha, beta)
@inline mul!(out::AbstractMatrix, D::DeltaArray{<:Any,2}, B::Adjoint{<:Any,<:AbstractVecOrMat}, alpha::Number, beta::Number) = _muldiag!(out, D, B, alpha, beta)
@inline mul!(out::AbstractMatrix, D::DeltaArray{<:Any,2}, B::Transpose{<:Any,<:AbstractVecOrMat}, alpha::Number, beta::Number) = _muldiag!(out, D, B, alpha, beta)
@inline mul!(out::AbstractMatrix, A::AbstractMatrix, D::DeltaArray{<:Any,2}, alpha::Number, beta::Number) = _muldiag!(out, A, D, alpha, beta)
@inline mul!(out::AbstractMatrix, A::Adjoint{<:Any,<:AbstractVecOrMat}, D::DeltaArray{<:Any,2}, alpha::Number, beta::Number) = _muldiag!(out, A, D, alpha, beta)
@inline mul!(out::AbstractMatrix, A::Transpose{<:Any,<:AbstractVecOrMat}, D::DeltaArray{<:Any,2}, alpha::Number, beta::Number) = _muldiag!(out, A, D, alpha, beta)
@inline mul!(C::DeltaArray{<:Any,2}, Da::DeltaArray{<:Any,2}, Db::DeltaArray{<:Any,2}, alpha::Number, beta::Number) = _muldiag!(C, Da, Db, alpha, beta)
mul!(C::AbstractMatrix, Da::DeltaArray{<:Any,2}, Db::DeltaArray{<:Any,2}, alpha::Number, beta::Number) = _muldiag!(C, Da, Db, alpha, beta)
/(A::AbstractVecOrMat, D::DeltaArray{<:Any,2}) = _rdiv!(similar(A, LinearAlgebra._init_eltype(/, eltype(A), eltype(D))), A, D)
/(A::HermOrSym, D::DeltaArray{<:Any,2}) = _rdiv!(similar(A, LinearAlgebra._init_eltype(/, eltype(A), eltype(D)), size(A)), A, D)
rdiv!(A::AbstractVecOrMat, D::DeltaArray{<:Any,2}) = @inline _rdiv!(A, A, D)
# avoid copy when possible via internal 3-arg backend
function _rdiv!(B::AbstractVecOrMat, A::AbstractVecOrMat, D::DeltaArray{<:Any,2})
require_one_based_indexing(A)
dd = D.data
m, n = size(A, 1), size(A, 2)
if (k = length(dd)) != n
throw(DimensionMismatch("left hand side has $n columns but D is $k by $k"))
end
@inbounds for j in 1:n
ddj = dd[j]
iszero(ddj) && throw(SingularException(j))
for i in 1:m
B[i, j] = A[i, j] / ddj
end
end
B
end
function (\)(D::DeltaArray{<:Any,2}, B::AbstractVector)
j = findfirst(iszero, D.data)
isnothing(j) || throw(SingularException(j))
return D.data .\ B
end
\(D::DeltaArray{<:Any,2}, B::AbstractMatrix) = ldiv!(similar(B, LinearAlgebra._init_eltype(\, eltype(D), eltype(B))), D, B)
\(D::DeltaArray{<:Any,2}, B::HermOrSym) = ldiv!(similar(B, LinearAlgebra._init_eltype(\, eltype(D), eltype(B)), size(B)), D, B)
ldiv!(D::DeltaArray{<:Any,2}, B::AbstractVecOrMat) = @inline ldiv!(B, D, B)
function ldiv!(B::AbstractVecOrMat, D::DeltaArray{<:Any,2}, A::AbstractVecOrMat)
require_one_based_indexing(A, B)
dd = D.data
d = length(dd)
m, n = size(A, 1), size(A, 2)
m′, n′ = size(B, 1), size(B, 2)
m == d || throw(DimensionMismatch("right hand side has $m rows but D is $d by $d"))
(m, n) == (m′, n′) || throw(DimensionMismatch("expect output to be $m by $n, but got $m′ by $n′"))
j = findfirst(iszero, D.data)
isnothing(j) || throw(SingularException(j))
@inbounds for j = 1:n, i = 1:m
B[i, j] = dd[i] \ A[i, j]
end
B
end
# Optimizations for \, / between DeltaArrays
\(D::DeltaArray{<:Any,2}, B::DeltaArray{<:Any,2}) = ldiv!(similar(B, Base.promote_op(\, eltype(D), eltype(B))), D, B)
/(A::DeltaArray{<:Any,2}, D::DeltaArray{<:Any,2}) = _rdiv!(similar(A, Base.promote_op(/, eltype(A), eltype(D))), A, D)
function _rdiv!(Dc::DeltaArray{<:Any,2}, Db::DeltaArray{<:Any,2}, Da::DeltaArray{<:Any,2})
n, k = length(Db.data), length(Da.data)
n == k || throw(DimensionMismatch("left hand side has $n columns but D is $k by $k"))
j = findfirst(iszero, Da.data)
isnothing(j) || throw(SingularException(j))
Dc.data .= Db.data ./ Da.data
Dc
end
ldiv!(Dc::DeltaArray{<:Any,2}, Da::DeltaArray{<:Any,2}, Db::DeltaArray{<:Any,2}) = DeltaArray{2}(ldiv!(Dc.data, Da, Db.data))
# optimizations for (Sym)Tridiagonal and DeltaArray
@propagate_inbounds _getudiag(T::Tridiagonal, i) = T.du[i]
@propagate_inbounds _getudiag(S::SymTridiagonal, i) = S.ev[i]
@propagate_inbounds _getdiag(T::Tridiagonal, i) = T.d[i]
@propagate_inbounds _getdiag(S::SymTridiagonal, i) = symmetric(S.dv[i], :U)::LinearAlgebra.symmetric_type(eltype(S.dv))
@propagate_inbounds _getldiag(T::Tridiagonal, i) = T.dl[i]
@propagate_inbounds _getldiag(S::SymTridiagonal, i) = transpose(S.ev[i])
function (\)(D::DeltaArray{<:Any,2}, S::SymTridiagonal)
T = Base.promote_op(\, eltype(D), eltype(S))
du = similar(S.ev, T, max(length(S.dv) - 1, 0))
d = similar(S.dv, T, length(S.dv))
dl = similar(S.ev, T, max(length(S.dv) - 1, 0))
ldiv!(Tridiagonal(dl, d, du), D, S)
end
(\)(D::DeltaArray{<:Any,2}, T::Tridiagonal) = ldiv!(similar(T, Base.promote_op(\, eltype(D), eltype(T))), D, T)
function ldiv!(T::Tridiagonal, D::DeltaArray{<:Any,2}, S::Union{SymTridiagonal,Tridiagonal})
m = size(S, 1)
dd = D.data
if (k = length(dd)) != m
throw(DimensionMismatch("diagonal matrix is $k by $k but right hand side has $m rows"))
end
if length(T.d) != m
throw(DimensionMismatch("target matrix size $(size(T)) does not match input matrix size $(size(S))"))
end
m == 0 && return T
j = findfirst(iszero, dd)
isnothing(j) || throw(SingularException(j))
ddj = dd[1]
T.d[1] = ddj \ _getdiag(S, 1)
@inbounds if m > 1
T.du[1] = ddj \ _getudiag(S, 1)
for j in 2:m-1
ddj = dd[j]
T.dl[j-1] = ddj \ _getldiag(S, j - 1)
T.d[j] = ddj \ _getdiag(S, j)
T.du[j] = ddj \ _getudiag(S, j)
end
ddj = dd[m]
T.dl[m-1] = ddj \ _getldiag(S, m - 1)
T.d[m] = ddj \ _getdiag(S, m)
end
return T
end
function (/)(S::SymTridiagonal, D::DeltaArray{<:Any,2})
T = Base.promote_op(\, eltype(D), eltype(S))
du = similar(S.ev, T, max(length(S.dv) - 1, 0))
d = similar(S.dv, T, length(S.dv))
dl = similar(S.ev, T, max(length(S.dv) - 1, 0))
_rdiv!(Tridiagonal(dl, d, du), S, D)
end
(/)(T::Tridiagonal, D::DeltaArray{<:Any,2}) = _rdiv!(similar(T, Base.promote_op(/, eltype(T), eltype(D))), T, D)
function _rdiv!(T::Tridiagonal, S::Union{SymTridiagonal,Tridiagonal}, D::DeltaArray{<:Any,2})
n = size(S, 2)
dd = D.data
if (k = length(dd)) != n
throw(DimensionMismatch("left hand side has $n columns but D is $k by $k"))
end
if length(T.d) != n
throw(DimensionMismatch("target matrix size $(size(T)) does not match input matrix size $(size(S))"))
end
n == 0 && return T
j = findfirst(iszero, dd)
isnothing(j) || throw(SingularException(j))
ddj = dd[1]
T.d[1] = _getdiag(S, 1) / ddj
@inbounds if n > 1
T.dl[1] = _getldiag(S, 1) / ddj
for j in 2:n-1
ddj = dd[j]
T.dl[j] = _getldiag(S, j) / ddj
T.d[j] = _getdiag(S, j) / ddj
T.du[j-1] = _getudiag(S, j - 1) / ddj
end
ddj = dd[n]
T.d[n] = _getdiag(S, n) / ddj
T.du[n-1] = _getudiag(S, n - 1) / ddj
end
return T
end
# Optimizations for [l/r]mul!, l/rdiv!, *, / and \ between Triangular and DeltaArray.
# These functions are generally more efficient if we calculate the whole data field.
# The following code implements them in a unified pattern to avoid missing.
@inline function _setdiag!(data, f, diag, diag′=nothing)
@inbounds for i in 1:length(diag)
data[i, i] = isnothing(diag′) ? f(diag[i]) : f(diag[i], diag′[i])
end
data
end
for Tri in (:UpperTriangular, :LowerTriangular)
UTri = Symbol(:Unit, Tri)
# 2 args
for (fun, f) in zip((:*, :rmul!, :rdiv!, :/), (:identity, :identity, :inv, :inv))
@eval $fun(A::$Tri, D::DeltaArray{<:Any,2}) = $Tri($fun(A.data, D))
@eval $fun(A::$UTri, D::DeltaArray{<:Any,2}) = $Tri(_setdiag!($fun(A.data, D), $f, D.data))
end
for (fun, f) in zip((:*, :lmul!, :ldiv!, :\), (:identity, :identity, :inv, :inv))
@eval $fun(D::DeltaArray{<:Any,2}, A::$Tri) = $Tri($fun(D, A.data))
@eval $fun(D::DeltaArray{<:Any,2}, A::$UTri) = $Tri(_setdiag!($fun(D, A.data), $f, D.data))
end
# 3-arg ldiv!
@eval ldiv!(C::$Tri, D::DeltaArray{<:Any,2}, A::$Tri) = $Tri(ldiv!(C.data, D, A.data))
@eval ldiv!(C::$Tri, D::DeltaArray{<:Any,2}, A::$UTri) = $Tri(_setdiag!(ldiv!(C.data, D, A.data), inv, D.data))
# 3-arg mul!: invoke 5-arg mul! rather than lmul!
@eval mul!(C::$Tri, A::Union{$Tri,$UTri}, D::DeltaArray{<:Any,2}) = mul!(C, A, D, true, false)
# 5-arg mul!
@eval @inline mul!(C::$Tri, D::DeltaArray{<:Any,2}, A::$Tri, α::Number, β::Number) = $Tri(mul!(C.data, D, A.data, α, β))
@eval @inline function mul!(C::$Tri, D::DeltaArray{<:Any,2}, A::$UTri, α::Number, β::Number)
iszero(α) && return LinearAlgebra._rmul_or_fill!(C, β)
diag′ = iszero(β) ? nothing : diag(C)
data = mul!(C.data, D, A.data, α, β)
$Tri(_setdiag!(data, MulAddMul(α, β), D.data, diag′))
end
@eval @inline mul!(C::$Tri, A::$Tri, D::DeltaArray{<:Any,2}, α::Number, β::Number) = $Tri(mul!(C.data, A.data, D, α, β))
@eval @inline function mul!(C::$Tri, A::$UTri, D::DeltaArray{<:Any,2}, α::Number, β::Number)
iszero(α) && return LinearAlgebra._rmul_or_fill!(C, β)
diag′ = iszero(β) ? nothing : diag(C)
data = mul!(C.data, A.data, D, α, β)
$Tri(_setdiag!(data, MulAddMul(α, β), D.data, diag′))
end
end
kron(A::DeltaArray{<:Any,N}, B::DeltaArray{<:Any,M}) where {N,M} = DeltaArray{N + M}(kron(A.data, B.data))
function kron(A::DeltaArray{<:Any,2}, B::SymTridiagonal)
kdv = kron(delta(A), B.dv)
# We don't need to drop the last element
kev = kron(delta(A), LinearAlgebra._pushzero(LinearAlgebra._evview(B)))
SymTridiagonal(kdv, kev)
end
function kron(A::DeltaArray{<:Any,2}, B::Tridiagonal)
# `_droplast!` is only guaranteed to work with `Vector`
kd = LinearAlgebra._makevector(kron(delta(A), B.d))
kdl = LinearAlgebra._droplast!(LinearAlgebra._makevector(kron(delta(A), LinearAlgebra._pushzero(B.dl))))
kdu = LinearAlgebra._droplast!(LinearAlgebra._makevector(kron(delta(A), LinearAlgebra._pushzero(B.du))))
Tridiagonal(kdl, kd, kdu)
end
@inline function kron!(C::AbstractMatrix, A::DeltaArray{<:Any,2}, B::AbstractMatrix)
require_one_based_indexing(B)
(mA, nA) = size(A)
(mB, nB) = size(B)
(mC, nC) = size(C)
@boundscheck (mC, nC) == (mA * mB, nA * nB) ||
throw(DimensionMismatch("expect C to be a $(mA * mB)x$(nA * nB) matrix, got size $(mC)x$(nC)"))
isempty(A) || isempty(B) || fill!(C, zero(A[1, 1] * B[1, 1]))
m = 1
@inbounds for j = 1:nA
A_jj = A[j, j]
for k = 1:nB
for l = 1:mB
C[m] = A_jj * B[l, k]
m += 1
end
m += (nA - 1) * mB
end
m += mB
end
return C
end
@inline function kron!(C::AbstractMatrix, A::AbstractMatrix, B::DeltaArray{<:Any})
require_one_based_indexing(A)
(mA, nA) = size(A)
(mB, nB) = size(B)
(mC, nC) = size(C)
@boundscheck (mC, nC) == (mA * mB, nA * nB) ||
throw(DimensionMismatch("expect C to be a $(mA * mB)x$(nA * nB) matrix, got size $(mC)x$(nC)"))
isempty(A) || isempty(B) || fill!(C, zero(A[1, 1] * B[1, 1]))
m = 1
@inbounds for j = 1:nA
for l = 1:mB
Bll = B[l, l]
for k = 1:mA
C[m] = A[k, j] * Bll
m += nB
end
m += 1
end
m -= nB
end
return C
end
conj(D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(conj(D.data))
conj!(D::DeltaArray) = conj!(D.data)
transpose(D::DeltaArray{<:Number}) = D
transpose(D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(transpose.(D.data))
adjoint(D::DeltaArray{<:Number}) = conj(D)
adjoint(D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}(adjoint.(D.data))
permutedims(D::DeltaArray) = D
permutedims(D::DeltaArray, perm) = (Base.checkdims_perm(D, D, perm); D)
function diag(D::DeltaArray{T,2}, k::Integer=0) where {T}
# every branch call similar(..., ::Int) to make sure the
# same vector type is returned independent of k
if k == 0
return copyto!(similar(D.data, length(D.data)), D.data)
elseif -size(D, 1) <= k <= size(D, 1)
return fill!(similar(D.data, size(D, 1) - abs(k)), zero(T))
else
throw(ArgumentError("requested diagonal, $k, must be at least $(-size(D, 1)) and at most $(size(D, 2)) for an $(size(D, 1))-by-$(size(D, 2)) matrix"))
end
end
tr(D::DeltaArray) = sum(tr, D.data)
det(D::DeltaArray) = prod(det, D.data)
function logdet(D::DeltaArray{<:Complex})
z = sum(log, D.data)
complex(real(z), rem2pi(imag(z), RoundNearest))
end
function logabsdet(A::DeltaArray)
mapreduce(x -> (log(abs(x)), sign(x)), ((d1, s1), (d2, s2)) -> (d1 + d2, s1 * s2), A.data)
end
for f in (:exp, :cis, :log, :sqrt,
:cos, :sin, :tan, :csc, :sec, :cot,
:cosh, :sinh, :tanh, :csch, :sech, :coth,
:acos, :asin, :atan, :acsc, :asec, :acot,
:acosh, :asinh, :atanh, :acsch, :asech, :acoth)
@eval Base.$f(D::DeltaArray{<:Any,N}) where {N} = DeltaArray{N}($f.(D.data))
end
function inv(D::DeltaArray{T,N}) where {T,N}
Di = similar(D.data, typeof(inv(oneunit(T))))
for i in 1:length(D.data)
if iszero(D.data[i])
throw(SingularException(i))
end
Di[i] = inv(D.data[i])
end
DeltaArray{N}(Di)
end
function pinv(D::DeltaArray{T,N}) where {T,N}
Di = similar(D.data, typeof(inv(oneunit(T))))
for i in 1:length(D.data)
if !iszero(D.data[i])
invD = inv(D.data[i])
if isfinite(invD)
Di[i] = invD
continue
end
end
# fallback
Di[i] = zero(T)
end
DeltaArray{N}(Di)
end
function pinv(D::DeltaArray{T,N}, tol::Real) where {T,N}
Di = similar(D.data, typeof(inv(oneunit(T))))
if !isempty(D.data)
maxabsD = maximum(abs, D.data)
for i in 1:length(D.data)
if abs(D.data[i]) > tol * maxabsD
invD = inv(D.data[i])
if isfinite(invD)
Di[i] = invD
continue
end
end
# fallback
Di[i] = zero(T)
end
end
DeltaArray{N}(Di)
end
# Eigensystem
eigvals(D::DeltaArray{<:Number,2}; permute::Bool=true, scale::Bool=true) = copy(D.data)
eigvals(D::DeltaArray{<:Any,2}; permute::Bool=true, scale::Bool=true) = eigvals.(D.data)
eigvecs(D::DeltaArray{<:Any,2}) = Matrix{eltype(D)}(I, size(D))
function eigen(D::DeltaArray{<:Any,2}; permute::Bool=true, scale::Bool=true, sortby::Union{Function,Nothing}=nothing)
if any(!isfinite, D.data)
throw(ArgumentError("matrix contains Infs or NaNs"))
end
Td = Base.promote_op(/, eltype(D), eltype(D))
λ = eigvals(D)
if !isnothing(sortby)
p = sortperm(λ; alg=QuickSort, by=sortby)
λ = λ[p]
end
evecs = Matrix{Td}(I, size(D))
Eigen(λ, evecs)
end
function eigen(Da::DeltaArray{<:Any,2}, Db::DeltaArray{<:Any,2}; sortby::Union{Function,Nothing}=nothing)
if any(!isfinite, Da.data) || any(!isfinite, Db.data)
throw(ArgumentError("matrices contain Infs or NaNs"))
end
if any(iszero, Db.data)
throw(ArgumentError("right-hand side diagonal matrix is singular"))
end
return GeneralizedEigen(eigen(Db \ Da; sortby)...)
end
function eigen(A::AbstractMatrix, D::DeltaArray{<:Any,2}; sortby::Union{Function,Nothing}=nothing)
if any(iszero, D.data)
throw(ArgumentError("right-hand side diagonal matrix is singular"))
end
if size(A, 1) == size(A, 2) && isdiag(A)
return eigen(DeltaArray(A), D; sortby)
elseif ishermitian(A)
S = promote_type(LinearAlgebra.eigtype(eltype(A)), eltype(D))
return eigen!(eigencopy_oftype(Hermitian(A), S), DeltaArray{S,2}(D); sortby)
else
S = promote_type(LinearAlgebra.eigtype(eltype(A)), eltype(D))
return eigen!(eigencopy_oftype(A, S), DeltaArray{S,2}(D); sortby)
end
end
# Singular system
svdvals(D::DeltaArray{<:Number,2}) = sort!(abs.(D.data), rev=true)
svdvals(D::DeltaArray{<:Any,2}) = [svdvals(v) for v in D.data]
function svd(D::DeltaArray{T,2}) where {T<:Number}
d = D.data
s = abs.(d)
piv = sortperm(s, rev=true)
S = s[piv]
Td = typeof(oneunit(T) / oneunit(T))
U = zeros(Td, size(D))
Vt = copy(U)
for i in 1:length(d)
j = piv[i]
U[j, i] = d[j] / S[i]
Vt[i, j] = one(Td)
end
return SVD(U, S, Vt)
end
# disambiguation methods: * and / of DeltaArray{<:Any,2} and Adj/Trans AbsVec
*(u::AdjointAbsVec, D::DeltaArray{<:Any,2}) = (D'u')'
*(u::TransposeAbsVec, D::DeltaArray{<:Any,2}) = transpose(transpose(D) * transpose(u))
*(x::AdjointAbsVec, D::DeltaArray{<:Any,2}, y::AbstractVector) = _mapreduce_prod(*, x, D, y)
*(x::TransposeAbsVec, D::DeltaArray{<:Any,2}, y::AbstractVector) = _mapreduce_prod(*, x, D, y)
/(u::AdjointAbsVec, D::DeltaArray{<:Any,2}) = (D' \ u')'
/(u::TransposeAbsVec, D::DeltaArray{<:Any,2}) = transpose(transpose(D) \ transpose(u))
# disambiguation methods: Call unoptimized version for user defined AbstractTriangular.
*(A::AbstractTriangular, D::DeltaArray{<:Any,2}) = @invoke *(A::AbstractMatrix, D::DeltaArray{<:Any,2})
*(D::DeltaArray{<:Any,2}, A::AbstractTriangular) = @invoke *(D::DeltaArray{<:Any,2}, A::AbstractMatrix)
dot(A::DeltaArray{<:Any,2}, B::DeltaArray{<:Any,2}) = dot(A.data, B.data)
function dot(D::DeltaArray{<:Any,2}, B::AbstractMatrix)
size(D) == size(B) || throw(DimensionMismatch("Matrix sizes $(size(D)) and $(size(B)) differ"))
return dot(D.data, view(B, deltaind(B)))
end
dot(A::AbstractMatrix, B::DeltaArray{<:Any,2}) = conj(dot(B, A))
function _mapreduce_prod(f, x, D::DeltaArray{<:Any,2}, y)
if !(length(x) == length(D.data) == length(y))
throw(DimensionMismatch("x has length $(length(x)), D has size $(size(D)), and y has $(length(y))"))
end
if isempty(x) && isempty(D) && isempty(y)
return zero(Base.promote_op(f, eltype(x), eltype(D), eltype(y)))
else
return mapreduce(t -> f(t[1], t[2], t[3]), +, zip(x, D.data, y))
end
end
# TODO cholesky
sum(A::DeltaArray) = sum(A.data)
sum(A::DeltaArray{<:Any,N}, dims::Integer) where {N} = N <= 1 ? sum(A.data) : DeltaArray{N - 1}(A.data)
function Base.muladd(A::DeltaArray{<:Any,2}, B::DeltaArray{<:Any,2}, z::DeltaArray{<:Any,2})
DeltaArray{2}(A.data .* B.data .+ z.data)
end
end | DeltaArrays | https://github.com/bsc-quantic/DeltaArrays.jl.git |
|
[
"Apache-2.0"
] | 0.1.1 | c84370337e22f75dce100bae1aa9be53b44936c3 | code | 549 | using DeltaArrays
@testset "isinteger and isreal" begin
@test all(isinteger, DeltaArray(rand(1:5, 5)))
@test isreal(DeltaArray(rand(5)))
end
@testset "unary ops" begin
let A = DeltaArray(rand(1:5, 5))
@test +(A) == A
@test *(A) == A
end
end
@testset "reverse dim on empty" begin
@test reverse(DeltaArray([]), dims=1) == DeltaArray([])
end
@testset "ndims and friends" begin
@test ndims(DeltaArray(rand(1:5, 5))) == 2
@test_skip ndims(DeltaArray{Float64}) == 2
@test_skip ndims(DeltaArray) == 2
end | DeltaArrays | https://github.com/bsc-quantic/DeltaArrays.jl.git |
|
[
"Apache-2.0"
] | 0.1.1 | c84370337e22f75dce100bae1aa9be53b44936c3 | code | 39 | using Test
include("abstractarray.jl") | DeltaArrays | https://github.com/bsc-quantic/DeltaArrays.jl.git |
|
[
"Apache-2.0"
] | 0.1.1 | c84370337e22f75dce100bae1aa9be53b44936c3 | docs | 384 | # DeltaArrays.jl
This Julia library provides `DeltaArray`, an efficient N-dimensional generalization of the `Diagonal` array type. If your array $A$ is of the form
$$
A = a_i \delta_{i \dots j} = \begin{cases}
a_i, &\text{if} ~~ i=\dots=j \\
0, &\text{otherwise}
\end{cases}
$$
then it can be represented by a `DeltaArray`.
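A minimal usage sketch (illustrative):

```julia
using DeltaArrays

D = DeltaArray{3}([1, 10]) # order-3 delta array with hyper-diagonal (1, 10)
D[1, 1, 1], D[2, 2, 2]     # (1, 10)
D[1, 2, 1]                 # 0, since every off-diagonal entry is zero
```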
For compatibility, `DeltaArray{T,2}` should just behave like `Diagonal{T}`. | DeltaArrays | https://github.com/bsc-quantic/DeltaArrays.jl.git |
|
[
"MIT"
] | 1.0.0 | 80dff3e6b98a4a79c59be4de03838110e4c88021 | code | 885 | using MambaModels
# Data
globe_toss = Dict{Symbol, Any}(
:w => [6, 7, 5, 6, 6],
:n => [9, 9, 9, 9, 9]
)
globe_toss[:N] = length(globe_toss[:w]);
# Model Specification
model = Model(
w = Stochastic(1,
(n, p, N) ->
UnivariateDistribution[Binomial(n[i], p) for i in 1:N],
false
),
p = Stochastic(() -> Beta(1, 1))
);
# Initial Values
inits = [
Dict(:w => globe_toss[:w], :n => globe_toss[:n], :p => 0.5),
Dict(:w => globe_toss[:w], :n => globe_toss[:n], :p => rand(Beta(1, 1)))
];
# Sampling Scheme
scheme = [NUTS(:p)]
setsamplers!(model, scheme);
# MCMC Simulations
chn = mcmc(model, globe_toss, inits, 10000, burnin=2500, thin=1, chains=2);
# Describe draws
describe(chn)
# Convert to MCMCChains.Chains object
chn2 = MCMCChains.Chains(chn.value, String.(chn.names))
# Describe the MCMCChains
MCMCChains.describe(chn2)
# End of `02/m2.1m.jl`
| MambaModels | https://github.com/StatisticalRethinkingJulia/MambaModels.jl.git |
|
[
"MIT"
] | 1.0.0 | 80dff3e6b98a4a79c59be4de03838110e4c88021 | code | 1441 | using MambaModels
## Data
line = Dict{Symbol, Any}()
df = DataFrame(CSV.read(joinpath(@__DIR__, "..", "..", "data", "Howell1.csv"),
delim=';'));
# Use only adults
df2 = filter(row -> row[:age] >= 18, df);
mean_weight = mean(df2[:, :weight])
df2[!, :weight_c] = convert(Vector{Float64}, df2[:, :weight]) .- mean_weight ;
line[:x] = df2[:, :weight_c];
line[:y] = df2[:, :height];
line[:xmat] = convert(Array{Float64, 2}, [ones(length(line[:x])) line[:x]])
# Model Specification
model = Model(
y = Stochastic(1,
(xmat, beta, s2) -> MvNormal(xmat * beta, sqrt(s2)),
false
),
beta = Stochastic(1, () -> MvNormal([178, 0], [sqrt(10000), sqrt(100)])),
s2 = Stochastic(() -> Uniform(0, 50))
)
# Initial Values
inits = [
Dict{Symbol, Any}(
:y => line[:y],
:beta => [rand(Normal(178, 100)), rand(Normal(0, 10))],
:s2 => rand(Uniform(0, 50))
)
for i in 1:3
]
# Tuning Parameters
scale1 = [0.5, 0.25]
summary1 = identity
eps1 = 0.5
scale2 = 0.5
summary2 = x -> [mean(x); sqrt(var(x))]
eps2 = 0.1
# Define sampling scheme
scheme = [
Mamba.NUTS([:beta]),
Mamba.Slice([:s2], 10)
]
setsamplers!(model, scheme)
# MCMC Simulation
chn = mcmc(model, line, inits, 10000, burnin=1000, chains=3)
# Show draws summary
describe(chn)
# Convert to MCMCChains.Chains object
chn2 = MCMCChains.Chains(chn.value, String.(chn.names))
# Describe the MCMCChains
MCMCChains.describe(chn2)
# End of `04/m4.1m.jl`
| MambaModels | https://github.com/StatisticalRethinkingJulia/MambaModels.jl.git |
|
[
"MIT"
] | 1.0.0 | 80dff3e6b98a4a79c59be4de03838110e4c88021 | code | 170 | module MambaModels
using Reexport
@reexport using Mamba
@reexport using CSV, DataFrames, Distributions
@reexport using MCMCChains, StatsFuns, StatsPlots
end # module
| MambaModels | https://github.com/StatisticalRethinkingJulia/MambaModels.jl.git |
|
[
"MIT"
] | 1.0.0 | 80dff3e6b98a4a79c59be4de03838110e4c88021 | code | 43 | using MambaModels
using Test
@test 1 == 1
| MambaModels | https://github.com/StatisticalRethinkingJulia/MambaModels.jl.git |
|
[
"MIT"
] | 1.0.0 | 80dff3e6b98a4a79c59be4de03838110e4c88021 | docs | 2573 | # MambaModels
| **Project Status** | **Documentation** | **Build Status** |
|:-------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|
|![][project-status-img] | [![][docs-stable-img]][docs-stable-url] [![][docs-dev-img]][docs-dev-url] | [![][travis-img]][travis-url] |
## Introduction
This package contains Julia versions of the MCMC models in the R package "rethinking" associated with the book [Statistical Rethinking](https://xcelab.net/rm/statistical-rethinking/) by Richard McElreath. It is part of the [StatisticalRethinkingJulia](https://github.com/StatisticalRethinkingJulia) Github organization of packages.
This package contains the [Mamba](https://github.com/brian-j-smith/Mamba.jl) versions of these models.
## Documentation
- [**STABLE**][docs-stable-url] — **documentation of the most recently tagged version.**
- [**DEVEL**][docs-dev-url] — *documentation of the in-development version.*
## Acknowledgements
The documentation has been generated using Literate.jl and Documenter.jl based on several ideas demonstrated by Tamas Papp in [DynamicHMCExamples.jl](https://tpapp.github.io/DynamicHMCExamples.jl).
## Questions and issues
Questions and contributions are very welcome, as are feature requests and suggestions. Please open an [issue][issues-url] if you encounter any problems or have a question.
[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-dev-url]: https://statisticalrethinkingjulia.github.io/MambaModels.jl/latest
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://statisticalrethinkingjulia.github.io/MambaModels.jl/stable
[travis-img]: https://travis-ci.org/StatisticalRethinkingJulia/MambaModels.jl.svg?branch=master
[travis-url]: https://travis-ci.org/StatisticalRethinkingJulia/MambaModels.jl
[codecov-img]: https://codecov.io/gh/StatisticalRethinkingJulia/MambaModels.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/StatisticalRethinkingJulia/MambaModels.jl
[issues-url]: https://github.com/StatisticalRethinkingJulia/MambaModels.jl/issues
[project-status-img]: https://img.shields.io/badge/lifecycle-wip-orange.svg
| MambaModels | https://github.com/StatisticalRethinkingJulia/MambaModels.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 3805 | using Distributed
using OMEinsumContractionOrders, OMEinsum, CUDA
println("find $(length(devices())) GPU devices")
const procs = addprocs(length(devices())-nprocs()+1)
const gpus = collect(devices())
const process_device_map = Dict(zip(procs, gpus))
@show process_device_map
@everywhere begin # these packages/functions should be accessible on all processes
using OMEinsumContractionOrders, OMEinsum, CUDA
CUDA.allowscalar(false)
function do_work(f, jobs, results) # define work function everywhere
while true
job = take!(jobs)
@info "running $job on device $(Distributed.myid())"
res = f(job)
put!(results, res)
end
end
end
"""
multiprocess_run(func, inputs::AbstractVector)
Run `func` in parallel over a vector of `inputs`.
Returns a vector of results.
"""
function multiprocess_run(func, inputs::AbstractVector{T}) where T
n = length(inputs)
jobs = RemoteChannel(()->Channel{T}(n));
results = RemoteChannel(()->Channel{Any}(n));
for i in 1:n
put!(jobs, inputs[i])
end
for p in workers() # start tasks on the workers to process requests in parallel
remote_do(do_work, p, func, jobs, results)
end
return Any[take!(results) for i=1:n]
end
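# A minimal usage sketch (illustrative; not executed in this script):
#   squares = multiprocess_run(x -> x^2, collect(1:10))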
"""
multigpu_einsum(code::SlicedEinsum, xs::AbstractArray...; size_info = nothing, process_device_map::Dict)
Multi-GPU contraction of a sliced einsum specified by `code`.
Each time, the program take the slice and upload them to a specific GPU device and do the contraction.
Other arguments are
* `xs` are input tensors allocated in **main memory**,
* `size_info` specifies extra size information,
* `process_device_map` is a map between processes and GPU devices.
"""
function multigpu_einsum(se::SlicedEinsum{LT,ET}, @nospecialize(xs::AbstractArray...); size_info = nothing, process_device_map::Dict) where {LT, ET}
length(se.slicing) == 0 && return se.eins(xs...; size_info=size_info)
size_dict = size_info===nothing ? Dict{OMEinsum.labeltype(se),Int}() : copy(size_info)
OMEinsum.get_size_dict!(se, xs, size_dict)
it = OMEinsumContractionOrders.SliceIterator(se, size_dict)
res = OMEinsum.get_output_array(xs, getindex.(Ref(size_dict), it.iyv))
eins_sliced = OMEinsumContractionOrders.drop_slicedim(se.eins, se.slicing)
inputs = collect(enumerate([copy(x) for x in it]))
@info "start multiple process contraction!"
results = multiprocess_run(inputs) do (k, slicemap)
@info "computing slice $k/$(length(it))"
device!(process_device_map[Distributed.myid()])
xsi = ntuple(i->CuArray(OMEinsumContractionOrders.take_slice(xs[i], it.ixsv[i], slicemap)), length(xs))
Array(einsum(eins_sliced, xsi, it.size_dict_sliced))
end
# accumulate results to `res`
for (resi, (k, slicemap)) in zip(results, inputs)
OMEinsumContractionOrders.fill_slice!(res, it.iyv, resi, slicemap)
end
return res
end
# A using case
# ---------------------------------------
using Yao, YaoToEinsum, Yao.EasyBuild
# I. create a quantum circuit
nbit = 20
c = Yao.dispatch!(variational_circuit(nbit, 10), :random)
# II. convert a tensor network
# 1. specify input and output states as product states,
prod_state = Dict(zip(1:nbit, zeros(Int, nbit)))
# 2. convert the circuit to einsum code,
code, xs = YaoToEinsum.yao2einsum(c; initial_state=prod_state, final_state=prod_state)
# 3. optimize the contraction order
size_dict = OMEinsum.get_size_dict(getixsv(code), xs)
slicedcode = optimize_code(code, size_dict, TreeSA(; nslices=5), MergeGreedy())
# III. do the contraction on multiple GPUs in parallel
@info "time/space complexity is $(timespace_complexity(slicedcode, size_dict))"
multigpu_einsum(slicedcode, xs...; process_device_map=process_device_map)
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 5333 | ### A Pluto.jl notebook ###
# v0.19.9
using Markdown
using InteractiveUtils
# ╔═╡ c4f8bf2b-d07b-478d-bcd9-ed61ccd5abe3
using Pkg; Pkg.activate()
# ╔═╡ ee4e8d2a-04db-425a-af21-f5bc5ce6e26b
using OMEinsum
# ╔═╡ 8d093496-2fe3-419b-a88b-f4a7b7dadad8
using JunctionTrees
# ╔═╡ 744be89f-da21-455e-81b2-77278bd1ddaa
using Zygote, LinearAlgebra
# ╔═╡ 0d247952-0e51-11ed-3550-279e22cfc49b
md"# OMEinsum deep dive"
# ╔═╡ 9927eea0-64be-45cc-83e9-0e25eea1499d
md"## Specify the einsum contraction"
# ╔═╡ 553aa4bb-966c-4eba-94f5-d191fc99befe
tensor_network = ein"ab,bc,cd,def,f->"
# ╔═╡ 3fc61408-18a4-4679-8880-7fc7fff3acb6
md"One can extract inputs and output labels using `getixsv` and `getiy`"
# ╔═╡ 91660727-6a55-4e38-939d-3d27ec10cc3e
getixsv(tensor_network)
# ╔═╡ b93688af-3bb0-4a9b-81f9-125897310c49
getiyv(tensor_network)
# ╔═╡ 6197f512-0ab6-4975-b3d0-83eb72490d85
md"One can also construct `EinCode` programatically as"
# ╔═╡ e71fd0a9-e4fe-48a8-b685-c0971c7f8918
EinCode(getixsv(tensor_network), getiyv(tensor_network))
# ╔═╡ d255114a-fec8-406c-b710-20f53c682a3f
md"The label type does not has to be `Char`, it can be any type."
# ╔═╡ 64b77f52-d476-4f57-b9ad-493a031bf296
md"**Example: loading factor graph from an `uai` file**"
# ╔═╡ 55a597c3-1a74-45a0-9237-178990a7eb9d
md"In the following, we use tensor network for probabilistic modeling. The first step is loading a factor graph form an file of [UAI format](https://mroavi.github.io/JunctionTrees.jl/stable/file_formats/uai/#UAI-model-file-format). The loaded model is decribed in detail in the [documentation of JunctionTrees.jl](https://mroavi.github.io/JunctionTrees.jl/stable/usage/)."
# ╔═╡ 42ded005-d55a-423d-b642-2f142c579036
uai_folder = joinpath(pkgdir(JunctionTrees), "docs", "src", "problems", "asia")
# ╔═╡ 917d33c0-c3c2-4865-8842-60c7248f0a67
# outputs are (number of variables,
# cardinalities of each variable,
# number of cliques,
# the factors (labelled tensors) for the cliques)
nvars, cards, nclique, factors = JunctionTrees.read_uai_file(joinpath(uai_folder, "asia.uai"))
# ╔═╡ 3a5b2c4e-aa6b-4ac8-8f43-eb7e217a85cb
# outputs are the observed variables and their values
obsvars, obsvals = [], []#JunctionTrees.read_uai_evid_file(joinpath(uai_folder, "asia.uai.evid"))
# ╔═╡ 5af37871-593c-43f2-acd1-45eb682ca0ac
md"The first 8 inputs labels are for unity tensors."
# ╔═╡ 896d755e-6a99-4f8d-a358-aa7714a036d5
factor_graph = EinCode([[[i] for i in 1:nvars]..., [[factor.vars...] for factor in factors]...], Int[]) # labels for edge tensors
# ╔═╡ da889e9e-4fb7-473b-908c-177e8e11c83d
fixedvertices=Dict(zip(obsvars, obsvals .- 1))
# ╔═╡ 07cb8edf-f009-428e-a0e3-4e9b30974ce4
size_dict = Dict([v=>(haskey(fixedvertices, v) ? 1 : 2) for v in 1:nvars])
# ╔═╡ 3505a5b2-6c20-4643-9937-4c1af27b7398
md"## Einsum with a specific contraction order"
# ╔═╡ 441fcd01-9b84-4389-adf7-d5eef09c0ebd
md"**Example: optimize the contraction order for inference**"
# ╔═╡ 58a3f8ab-0ecf-4024-9b35-d76ebaba2b32
optimized_factor_graph = optimize_code(factor_graph, size_dict, TreeSA(ntrials=1))
# ╔═╡ 7f728828-2710-46ca-ae54-a8dc512ecce7
md"prepare input tensors"
# ╔═╡ 7a105d4c-fd9d-4756-b6e2-93362b326219
clip_size(labels, fixedvertices) = [haskey(fixedvertices, v) ? (fixedvertices[v]+1:fixedvertices[v]+1) : Colon() for v in labels]
# ╔═╡ 65be0aaa-73bf-4120-a1d2-284c2cc7ad1d
tensors = [
[ones(haskey(fixedvertices, i) ? 1 : 2) for i=1:length(cards)]..., # unity tensors
[factor.vals[clip_size(factor.vars, fixedvertices)...]
for factor in factors]...
]
# ╔═╡ 6fcc8ec7-da6d-4198-a30f-6d792f2682a7
size.(tensors)
# ╔═╡ b1083a2c-3283-45f6-aaf7-36651643d8f8
getixsv(optimized_factor_graph)
# ╔═╡ f6414390-820f-435f-9d18-2bf2b499f268
optimized_factor_graph(tensors...)
# ╔═╡ 90f8f78a-f323-466c-b501-d95db4db76c7
marginals_raw = Zygote.gradient((args...)->optimized_factor_graph(args...)[], tensors...)
# ╔═╡ 0e89da78-08c0-49fb-8ec3-559345b32b0c
marginals_raw ./ sum.(marginals_raw)
# ╔═╡ Cell order:
# ╟─0d247952-0e51-11ed-3550-279e22cfc49b
# ╠═c4f8bf2b-d07b-478d-bcd9-ed61ccd5abe3
# ╠═ee4e8d2a-04db-425a-af21-f5bc5ce6e26b
# ╠═8d093496-2fe3-419b-a88b-f4a7b7dadad8
# ╟─9927eea0-64be-45cc-83e9-0e25eea1499d
# ╠═553aa4bb-966c-4eba-94f5-d191fc99befe
# ╟─3fc61408-18a4-4679-8880-7fc7fff3acb6
# ╠═91660727-6a55-4e38-939d-3d27ec10cc3e
# ╠═b93688af-3bb0-4a9b-81f9-125897310c49
# ╟─6197f512-0ab6-4975-b3d0-83eb72490d85
# ╠═e71fd0a9-e4fe-48a8-b685-c0971c7f8918
# ╟─d255114a-fec8-406c-b710-20f53c682a3f
# ╟─64b77f52-d476-4f57-b9ad-493a031bf296
# ╟─55a597c3-1a74-45a0-9237-178990a7eb9d
# ╠═42ded005-d55a-423d-b642-2f142c579036
# ╠═917d33c0-c3c2-4865-8842-60c7248f0a67
# ╠═3a5b2c4e-aa6b-4ac8-8f43-eb7e217a85cb
# ╟─5af37871-593c-43f2-acd1-45eb682ca0ac
# ╠═896d755e-6a99-4f8d-a358-aa7714a036d5
# ╠═da889e9e-4fb7-473b-908c-177e8e11c83d
# ╠═07cb8edf-f009-428e-a0e3-4e9b30974ce4
# ╟─3505a5b2-6c20-4643-9937-4c1af27b7398
# ╟─441fcd01-9b84-4389-adf7-d5eef09c0ebd
# ╠═58a3f8ab-0ecf-4024-9b35-d76ebaba2b32
# ╟─7f728828-2710-46ca-ae54-a8dc512ecce7
# ╠═7a105d4c-fd9d-4756-b6e2-93362b326219
# ╠═65be0aaa-73bf-4120-a1d2-284c2cc7ad1d
# ╠═6fcc8ec7-da6d-4198-a30f-6d792f2682a7
# ╠═b1083a2c-3283-45f6-aaf7-36651643d8f8
# ╠═f6414390-820f-435f-9d18-2bf2b499f268
# ╠═744be89f-da21-455e-81b2-77278bd1ddaa
# ╠═90f8f78a-f323-466c-b501-d95db4db76c7
# ╠═0e89da78-08c0-49fb-8ec3-559345b32b0c
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 321 | using OMEinsumContractionOrders, LuxorGraphPlot
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
viz_eins(eincode, filename = "eins.png")
nested_eins = optimize_code(eincode, uniformsize(eincode, 2), GreedyMethod())
viz_contraction(nested_eins) | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 1346 | module KaHyParExt
using OMEinsumContractionOrders: KaHyParBipartite, SparseMatrixCSC, group_sc, induced_subhypergraph, convert2int
import KaHyPar
import OMEinsumContractionOrders: bipartite_sc
using Suppressor: @suppress
function bipartite_sc(bipartiter::KaHyParBipartite, adj::SparseMatrixCSC, vertices, log2_sizes)
n_v = length(vertices)
subgraph, remaining_edges = induced_subhypergraph(adj, vertices)
hypergraph = KaHyPar.HyperGraph(subgraph, ones(n_v), convert2int(log2_sizes[remaining_edges]))
local parts
min_sc = 999999
for imbalance in bipartiter.imbalances
parts = @suppress KaHyPar.partition(hypergraph, 2; imbalance=imbalance, configuration=:edge_cut)
part0 = vertices[parts .== 0]
part1 = vertices[parts .== 1]
sc0, sc1 = group_sc(adj, part0, log2_sizes), group_sc(adj, part1, log2_sizes)
sc = max(sc0, sc1)
min_sc = min(sc, min_sc)
@debug "imbalance $imbalance: sc = $sc, group = ($(length(part0)), $(length(part1)))"
if sc <= bipartiter.sc_target
return part0, part1
end
end
error("fail to find a valid partition for `sc_target = $(bipartiter.sc_target)`, got minimum value `$min_sc` (imbalances = $(bipartiter.imbalances))")
end
@info "`OMEinsumContractionOrders` loads `KaHyParExt` extension successfully."
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 78 | module LuxorTensorPlot
include("LuxorTensorPlot/src/LuxorTensorPlot.jl")
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 452 | using OMEinsumContractionOrders, LuxorGraphPlot
using OMEinsumContractionOrders.SparseArrays
using LuxorGraphPlot.Graphs
using LuxorGraphPlot.Luxor
using LuxorGraphPlot.Luxor.FFMPEG
using OMEinsumContractionOrders: AbstractEinsum, NestedEinsum, SlicedEinsum
using OMEinsumContractionOrders: getixsv, getiyv
using OMEinsumContractionOrders: ein2hypergraph, ein2elimination
include("hypergraph.jl")
include("viz_eins.jl")
include("viz_contraction.jl") | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 3275 | struct LabeledHyperGraph{TS, TV, TE}
adjacency_matrix::SparseMatrixCSC{TS}
vertex_labels::Vector{TV}
edge_labels::Vector{TE}
open_edges::Vector{TE}
function LabeledHyperGraph(adjacency_matrix::SparseMatrixCSC{TS}; vl::Vector{TV} = [1:size(adjacency_matrix, 1)...], el::Vector{TE} = [1:size(adjacency_matrix, 2)...], oe::Vector = []) where{TS, TV, TE}
if size(adjacency_matrix, 1) != length(vl)
throw(ArgumentError("Number of vertices does not match number of vertex labels"))
end
if size(adjacency_matrix, 2) != length(el)
throw(ArgumentError("Number of edges does not match number of edge labels"))
end
if !all(oei in el for oei in oe)
throw(ArgumentError("Open edges must be in edge labels"))
end
if isempty(oe)
oe = Vector{TE}()
end
new{TS, TV, TE}(adjacency_matrix, vl, el, oe)
end
end
Base.show(io::IO, g::LabeledHyperGraph{TS, TV, TE}) where{TS,TV,TE} = print(io, "LabeledHyperGraph{$TS, $TV, $TE} \n adjacency_mat: $(g.adjacency_matrix) \n vertex: $(g.vertex_labels) \n edges: $(g.edge_labels) \n open_edges: $(g.open_edges)")
Base.:(==)(a::LabeledHyperGraph, b::LabeledHyperGraph) = a.adjacency_matrix == b.adjacency_matrix && a.vertex_labels == b.vertex_labels && a.edge_labels == b.edge_labels && a.open_edges == b.open_edges
struct TensorNetworkGraph{TT, TI}
graph::SimpleGraph
tensors_labels::Dict{Int, TT}
indices_labels::Dict{Int, TI}
open_indices::Vector{TI}
function TensorNetworkGraph(graph::SimpleGraph; tl::Dict{Int, TT} = Dict{Int, Int}(), il::Dict{Int, TI} = Dict{Int, Int}(), oi::Vector = []) where{TT, TI}
if length(tl) + length(il) != nv(graph)
throw(ArgumentError("Number of tensors + indices does not match number of vertices"))
end
if !all(oii in values(il) for oii in oi)
throw(ArgumentError("Open indices must be in indices"))
end
if isempty(oi)
oi = Vector{TI}()
end
new{TT, TI}(graph, tl, il, oi)
end
end
Base.show(io::IO, g::TensorNetworkGraph{TT, TI}) where{TT, TI} = print(io, "TensorNetworkGraph{$TT, $TI} \n graph: {$(nv(g.graph)), $(ne(g.graph))} \n tensors: $(g.tensors_labels) \n indices: $(g.indices_labels) \n open_indices: $(g.open_indices)")
# Convert the labeled hypergraph to a tensor network graph: both the vertices and the edges of the hypergraph are mapped to vertices of the tensor network graph, and the open edges are recorded.
function TensorNetworkGraph(lhg::LabeledHyperGraph{TS, TV, TE}) where{TS, TV, TE}
graph = SimpleGraph(length(lhg.vertex_labels) + length(lhg.edge_labels))
tensors_labels = Dict{Int, TV}()
indices_labels = Dict{Int, TE}()
lv = length(lhg.vertex_labels)
for i in 1:length(lhg.vertex_labels)
tensors_labels[i] = lhg.vertex_labels[i]
end
for i in 1:length(lhg.edge_labels)
indices_labels[i + lv] = lhg.edge_labels[i]
end
for i in 1:size(lhg.adjacency_matrix, 1)
for j in findall(!iszero, lhg.adjacency_matrix[i, :])
add_edge!(graph, i, j + lv)
end
end
TensorNetworkGraph(graph, tl=tensors_labels, il=indices_labels, oi=lhg.open_edges)
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 3905 | function OMEinsumContractionOrders.ein2elimination(code::NestedEinsum{T}) where{T}
elimination_order = Vector{T}()
_ein2elimination!(code, elimination_order)
return elimination_order
end
function OMEinsumContractionOrders.ein2elimination(code::SlicedEinsum{T, NestedEinsum{T}}) where{T}
elimination_order = Vector{T}()
_ein2elimination!(code.eins, elimination_order)
# the slicing indices are eliminated at the end
return vcat(elimination_order, code.slicing)
end
function _ein2elimination!(code::NestedEinsum{T}, elimination_order::Vector{T}) where{T}
if code.tensorindex == -1
for arg in code.args
_ein2elimination!(arg, elimination_order)
end
iy = unique(vcat(getiyv(code.eins)...))
for ix in unique(vcat(getixsv(code.eins)...))
if !(ix in iy) && !(ix in elimination_order)
push!(elimination_order, ix)
end
end
end
return elimination_order
end
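# Example (illustrative sketch, not part of the extension API): the elimination
# order lists the labels in the order they are summed out. For "ab, bc -> ac"
# only 'b' is eliminated; the optimizer choice below is an arbitrary assumption.
#
#     code = OMEinsumContractionOrders.EinCode([['a', 'b'], ['b', 'c']], ['a', 'c'])
#     nested = optimize_code(code, uniformsize(code, 2), GreedyMethod())
#     ein2elimination(nested)   # ['b']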
function elimination_frame(gviz::GraphViz, tng::TensorNetworkGraph{TG, TL}, elimination_order::Vector{TL}, i::Int; filename = nothing) where{TG, TL}
gviz2 = deepcopy(gviz)
for j in 1:i
id = _get_key(tng.indices_labels, elimination_order[j])
gviz2.vertex_colors[id] = (0.5, 0.5, 0.5, 0.5)
end
return show_graph(gviz2, filename = filename)
end
function OMEinsumContractionOrders.viz_contraction(code::T, args...; kwargs...) where{T <: AbstractEinsum}
throw(ArgumentError("Only NestedEinsum and SlicedEinsum{T, NestedEinsum{T}} have contraction order"))
end
"""
viz_contraction(code::Union{NestedEinsum, SlicedEinsum}; locs=StressLayout(), framerate=10, filename=tempname() * ".mp4", show_progress=true)
Visualize the contraction process of a tensor network.
### Arguments
- `code`: The tensor network to visualize.
### Keyword Arguments
- `locs`: The coordinates or layout algorithm to use for positioning the nodes in the graph. Default is `StressLayout()`.
- `framerate`: The frame rate of the animation. Default is `10`.
- `filename`: The name of the output file, with `.gif` or `.mp4` extension. Default is a temporary file with `.mp4` extension.
- `show_progress`: Whether to show progress information. Default is `true`.
### Returns
- the path of the generated file.
"""
function OMEinsumContractionOrders.viz_contraction(
code::Union{NestedEinsum, SlicedEinsum};
locs=StressLayout(),
framerate = 10,
filename::String = tempname() * ".mp4",
show_progress::Bool = true)
# analyze the output format
@assert endswith(filename, ".gif") || endswith(filename, ".mp4") "Unsupported file format: $filename, only :gif and :mp4 are supported"
tempdirectory = mktempdir()
# generate the frames
elimination_order = ein2elimination(code)
tng = TensorNetworkGraph(ein2hypergraph(code))
gviz = GraphViz(tng, locs)
le = length(elimination_order)
for i in 0:le
show_progress && @info "Frame $(i + 1) of $(le + 1)"
fig_name = joinpath(tempdirectory, "$(lpad(i+1, 10, "0")).png")
elimination_frame(gviz, tng, elimination_order, i; filename = fig_name)
end
if endswith(filename, ".gif")
Luxor.FFMPEG.exe(`-loglevel panic -r $(framerate) -f image2 -i $(tempdirectory)/%10d.png -filter_complex "[0:v] split [a][b]; [a] palettegen=stats_mode=full:reserve_transparent=on:transparency_color=FFFFFF [p]; [b][p] paletteuse=new=1:alpha_threshold=128" -y $filename`)
else
Luxor.FFMPEG.ffmpeg_exe(`
-loglevel panic
-r $(framerate)
-f image2
-i $(tempdirectory)/%10d.png
-c:v libx264
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2"
-r $(framerate)
-pix_fmt yuv420p
-y $filename`)
end
show_progress && @info "Generated output at: $filename"
return filename
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 3091 | function LuxorGraphPlot.GraphViz(tng::TensorNetworkGraph, locs=StressLayout(); highlight::Vector=[], highlight_color = (0.0, 0.0, 255.0, 0.5), kwargs...)
white = (255.0, 255.0, 255.0, 0.8)
black = (0.0, 0.0, 0.0, 1.0)
r = (255.0, 0.0, 0.0, 0.8)
g = (0.0, 255.0, 0.0, 0.8)
colors = Vector{typeof(r)}()
text = Vector{String}()
sizes = Vector{Float64}()
for i in 1:nv(tng.graph)
if i in keys(tng.tensors_labels)
push!(colors, white)
push!(text, string(tng.tensors_labels[i]))
push!(sizes, 20.0)
else
push!(colors, r)
push!(text, string(tng.indices_labels[i]))
push!(sizes, 10.0)
end
end
for oi in tng.open_indices
id = _get_key(tng.indices_labels, oi)
colors[id] = g
end
for hl in highlight
id = _get_key(tng.indices_labels, hl)
colors[id] = highlight_color
end
return GraphViz(tng.graph, locs, texts = text, vertex_colors = colors, vertex_sizes = sizes, kwargs...)
end
function _get_key(dict::Dict, value)
for (key, val) in dict
if val == value
return key
end
end
@error "Value not found in dictionary"
end
function OMEinsumContractionOrders.ein2hypergraph(code::T) where{T <: AbstractEinsum}
ixs = getixsv(code)
iy = getiyv(code)
edges = unique!([Iterators.flatten(ixs)...])
open_edges = [iy[i] for i in 1:length(iy) if iy[i] in edges]
rows = Int[]
cols = Int[]
for (i,ix) in enumerate(ixs)
push!(rows, map(x->i, ix)...)
push!(cols, map(x->findfirst(==(x), edges), ix)...)
end
adj = sparse(rows, cols, ones(Int, length(rows)))
return LabeledHyperGraph(adj, el = edges, oe = open_edges)
end
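# Example (illustrative sketch): the network "ab, bc -> ac" maps to a hypergraph
# with 2 vertices (the tensors) and 3 hyperedges (the labels a, b, c); the
# output labels a and c are recorded as open edges.
#
#     code = OMEinsumContractionOrders.EinCode([['a', 'b'], ['b', 'c']], ['a', 'c'])
#     lhg = ein2hypergraph(code)
#     lhg.open_edges   # ['a', 'c']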
"""
viz_eins(code::AbstractEinsum; locs=StressLayout(), filename = nothing, kwargs...)
Visualizes an `AbstractEinsum` object by creating a tensor network graph and rendering it using GraphViz.
### Arguments
- `code::AbstractEinsum`: The `AbstractEinsum` object to visualize.
### Keyword Arguments
- `locs=StressLayout()`: The coordinates or layout algorithm to use for positioning the nodes in the graph.
- `filename = nothing`: The name of the file to save the visualization to. If `nothing`, the visualization will be displayed on the screen instead of saving to a file.
- `config = GraphDisplayConfig()`: The configuration for displaying the graph. Please refer to the documentation of [`GraphDisplayConfig`](https://giggleliu.github.io/LuxorGraphPlot.jl/dev/ref/#LuxorGraphPlot.GraphDisplayConfig) for more information.
- `kwargs...`: Additional keyword arguments to be passed to the [`GraphViz`](https://giggleliu.github.io/LuxorGraphPlot.jl/dev/ref/#LuxorGraphPlot.GraphViz) constructor.
"""
function OMEinsumContractionOrders.viz_eins(code::AbstractEinsum; locs=StressLayout(), filename = nothing, config=LuxorTensorPlot.GraphDisplayConfig(), kwargs...)
tng = TensorNetworkGraph(ein2hypergraph(code))
gviz = GraphViz(tng, locs; kwargs...)
return show_graph(gviz; filename, config)
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 6534 | ################### The data types in OMEinsum ###################
abstract type AbstractEinsum end
struct EinCode{LT} <: AbstractEinsum
ixs::Vector{Vector{LT}}
iy::Vector{LT}
end
getixsv(rc::EinCode) = rc.ixs
getiyv(rc::EinCode) = rc.iy
Base.:(==)(a::EinCode, b::EinCode) = a.ixs == b.ixs && a.iy == b.iy
struct NestedEinsum{LT} <: AbstractEinsum
args::Vector{NestedEinsum}
tensorindex::Int # -1 if not leaf
eins::EinCode{LT}
NestedEinsum(args::Vector{NestedEinsum{LT}}, eins::EinCode) where LT = new{LT}(args, -1, eins)
NestedEinsum{LT}(arg::Int) where LT = new{LT}(NestedEinsum{LT}[], arg)
end
function Base.:(==)(a::NestedEinsum, b::NestedEinsum)
return a.args == b.args && a.tensorindex == b.tensorindex && if isdefined(a, :eins)
isdefined(b, :eins) && a.eins == b.eins
else
!isdefined(b, :eins)
end
end
isleaf(ne::NestedEinsum) = ne.tensorindex != -1
function getixsv(ne::NestedEinsum{LT}) where LT
d = collect_ixs!(ne, Dict{Int,Vector{LT}}())
ks = sort!(collect(keys(d)))
return @inbounds [d[i] for i in ks]
end
function collect_ixs!(ne::NestedEinsum, d::Dict{Int,Vector{LT}}) where LT
@inbounds for i=1:length(ne.args)
arg = ne.args[i]
if isleaf(arg)
d[arg.tensorindex] = getixsv(ne.eins)[i]
else
collect_ixs!(arg, d)
end
end
return d
end
getiyv(ne::NestedEinsum) = getiyv(ne.eins)
struct SlicedEinsum{LT,ET<:Union{EinCode{LT},NestedEinsum{LT}}} <: AbstractEinsum
slicing::Vector{LT}
eins::ET
end
Base.:(==)(a::SlicedEinsum, b::SlicedEinsum) = a.slicing == b.slicing && a.eins == b.eins
getixsv(ne::SlicedEinsum) = getixsv(ne.eins)
getiyv(ne::SlicedEinsum) = getiyv(ne.eins)
uniquelabels(code::AbstractEinsum) = unique!(vcat(getixsv(code)..., getiyv(code)))
labeltype(code::AbstractEinsum) = eltype(getiyv(code))
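# Example (illustrative sketch): a plain matrix-multiplication code
# "ij, jk -> ik" and its label queries.
#
#     code = EinCode([['i', 'j'], ['j', 'k']], ['i', 'k'])
#     getixsv(code)        # [['i', 'j'], ['j', 'k']]
#     getiyv(code)         # ['i', 'k']
#     uniquelabels(code)   # ['i', 'j', 'k']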
# Better printing
struct LeafString
str::String
end
function AbstractTrees.children(ne::NestedEinsum)
[isleaf(item) ? LeafString(_join(getixsv(ne.eins)[k])) : item for (k,item) in enumerate(ne.args)]
end
function AbstractTrees.printnode(io::IO, x::NestedEinsum)
isleaf(x) ? print(io, x.tensorindex) : print(io, x.eins)
end
AbstractTrees.printnode(io::IO, e::LeafString) = print(io, e.str)
function Base.show(io::IO, e::EinCode)
s = join([_join(ix) for ix in getixsv(e)], ", ") * " -> " * _join(getiyv(e))
print(io, s)
end
function Base.show(io::IO, e::NestedEinsum)
print_tree(io, e)
end
Base.show(io::IO, ::MIME"text/plain", e::NestedEinsum) = show(io, e)
Base.show(io::IO, ::MIME"text/plain", e::EinCode) = show(io, e)
_join(ix) = isempty(ix) ? "" : join(ix, connector(eltype(ix)))
connector(::Type{Char}) = ""
connector(::Type{Int}) = "∘"
connector(::Type) = "-"
function is_unary_or_binary(code::NestedEinsum)
if isleaf(code) return true end
if length(code.args) > 2 return false end
return all(is_unary_or_binary, code.args)
end
# reformulate the nested einsum, removing a given tensor without changing the space complexity
# consider only binary contraction tree with no openedges
function pivot_tree(code::NestedEinsum{LT}, removed_tensor_id::Int) where LT
@assert is_unary_or_binary(code) "The contraction tree is not binary"
@assert isempty(getiyv(code)) "The contraction tree has open edges"
path = path_to_tensor(code, removed_tensor_id)
isempty(path) && return code # the tensor is already at the root; nothing to pivot
right = popfirst!(path)
left = right == 1 ? 2 : 1
if isleaf(code.args[left]) && isleaf(code.args[right])
ixsv = getixsv(code.eins)
return NestedEinsum([code.args[left]], EinCode([ixsv[left]], ixsv[right]))
elseif isleaf(code.args[right])
return NestedEinsum([code.args[left].args...], EinCode(getixsv(code.args[left].eins), getixsv(code.eins)[right]))
else
# update the ein code to make sure the root of the left part and the right part are the same
left_code = code.args[left]
right_code = NestedEinsum([code.args[right].args...], EinCode(getixsv(code.args[right].eins), getixsv(code.eins)[left]))
end
tree = _pivot_tree!(left_code, right_code, path)
return tree
end
function _pivot_tree!(left_code::NestedEinsum{LT}, right_code::NestedEinsum{LT}, path::Vector{Int}) where{LT}
if !isleaf(right_code)
right = popfirst!(path)
left = right == 1 ? 2 : 1
if length(right_code.args) == 1
# origin: left: a, right: b -> a
# reformulated: left: a -> b, right: b
new_eins = EinCode([getiyv(right_code.eins)], getixsv(right_code.eins)[1])
left_code = NestedEinsum([left_code], new_eins)
left_code = _pivot_tree!(left_code, right_code.args[1], path)
elseif length(right_code.args) == 2
# origin: left: a, right: b, c -> a
# reformulated: left: a, b -> c, right: c
new_eins = EinCode([getiyv(right_code.eins), getixsv(right_code.eins)[left]], getixsv(right_code.eins)[right])
left_code = NestedEinsum([left_code, right_code.args[left]], new_eins)
left_code = _pivot_tree!(left_code, right_code.args[right], path)
else
error("The contraction tree is not binary")
end
end
return left_code
end
# find the path to a given tensor in a nested einsum
function path_to_tensor(code::NestedEinsum, index::Int)
path = Vector{Int}()
_find_root!(code, index, path)
return path
end
function _find_root!(code::NestedEinsum, index::Int, path::Vector{Int})
if isleaf(code) return code.tensorindex == index end
for (i, arg) in enumerate(code.args)
if _find_root!(arg, index, path)
pushfirst!(path, i)
return true
end
end
return false
end
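# Example (illustrative sketch): locating a leaf in a binary contraction tree.
# The returned entries are child indices (1 or 2) from the root down to the
# leaf; the exact path depends on the tree found by the greedy search.
#
#     ixs = [['i', 'j'], ['j', 'k'], ['k', 'i']]
#     code = optimize_greedy(ixs, Char[], Dict('i' => 2, 'j' => 2, 'k' => 2))
#     path_to_tensor(code, 3)   # e.g. [2], if tensor 3 sits directly under the root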
############### Simplifier and optimizer types #################
abstract type CodeSimplifier end
"""
MergeGreedy <: CodeSimplifier
MergeGreedy(; threshhold=-1e-12)
Contraction code simplifier (in order to reduce the time of calling optimizers) that
merges tensors greedily if the space complexity of merged tensors is reduced (difference smaller than the `threshhold`).
"""
Base.@kwdef struct MergeGreedy <: CodeSimplifier
threshhold::Float64=-1e-12
end
"""
MergeVectors <: CodeSimplifier
MergeVectors()
Contraction code simplifier (in order to reduce the time of calling optimizers) that merges vectors to closest tensors.
"""
struct MergeVectors <: CodeSimplifier end
# code optimizer
abstract type CodeOptimizer end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 1362 | module OMEinsumContractionOrders
using JSON
using SparseArrays
using StatsBase
using Base: RefValue
using Base.Threads
using AbstractTrees
using TreeWidthSolver
using TreeWidthSolver.Graphs
export CodeOptimizer, CodeSimplifier,
KaHyParBipartite, GreedyMethod, TreeSA, SABipartite, ExactTreewidth,
MergeGreedy, MergeVectors,
uniformsize,
simplify_code, optimize_code, optimize_permute,
# time space complexity
peak_memory, flop, contraction_complexity,
label_elimination_order
# writejson, readjson are not exported to avoid namespace conflict
# visualization tools provided by the extension `LuxorTensorPlot`
export viz_eins, viz_contraction
include("Core.jl")
include("utils.jl")
# greedy method
include("incidencelist.jl")
include("greedy.jl")
# bipartition based methods
include("sa.jl")
include("kahypar.jl")
# local search method
include("treesa.jl")
# tree width method
include("treewidth.jl")
# simplification passes
include("simplify.jl")
# interfaces
include("complexity.jl")
include("interfaces.jl")
# saveload
include("json.jl")
# extension for visualization
include("visualization.jl")
@deprecate timespacereadwrite_complexity(code, size_dict::Dict) (contraction_complexity(code, size_dict)...,)
@deprecate timespace_complexity(code, size_dict::Dict) (contraction_complexity(code, size_dict)...,)[1:2]
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 6685 | #################### compute peak memory ###########################
"""
peak_memory(code, size_dict::Dict) -> Int
Estimate peak memory in number of elements.
"""
function peak_memory(code::NestedEinsum, size_dict::Dict)
ixs = getixsv(code.eins)
iy = getiyv(code.eins)
# `largest_size` is the largest size during contraction
largest_size = 0
# `tempsize` is the memory to store contraction results from previous branches
tempsize = 0
for (i, arg) in enumerate(code.args)
if isleaf(arg)
largest_size_i = _mem(ixs[i], size_dict) + tempsize
else
largest_size_i = peak_memory(arg, size_dict) + tempsize
end
tempsize += _mem(ixs[i], size_dict)
largest_size = max(largest_size, largest_size_i)
end
# compare with the current contraction
return max(largest_size, tempsize + _mem(iy, size_dict))
end
_mem(iy, size_dict::Dict{LT,VT}) where {LT,VT} = isempty(iy) ? zero(VT) : prod(l->size_dict[l], iy)
function peak_memory(code::EinCode, size_dict::Dict)
ixs = getixsv(code)
iy = getiyv(code)
return sum(ix->_mem(ix, size_dict), ixs) + _mem(iy, size_dict)
end
function peak_memory(code::SlicedEinsum, size_dict::Dict)
size_dict_sliced = copy(size_dict)
for l in code.slicing
size_dict_sliced[l] = 1
end
return peak_memory(code.eins, size_dict_sliced) + _mem(getiyv(code.eins), size_dict)
end
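# Example (illustrative sketch): for a flat (un-nested) code, the peak memory is
# the sum of the input and output tensor sizes; with all bond dimensions 2, the
# closed network below needs 4 + 4 + 4 + 0 = 12 elements (an empty output counts 0).
#
#     code = EinCode([['i', 'j'], ['j', 'k'], ['k', 'i']], Char[])
#     peak_memory(code, Dict('i' => 2, 'j' => 2, 'k' => 2))   # 12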
###################### Time space complexity ###################
function __timespacereadwrite_complexity(ei::NestedEinsum, size_dict)
log2_sizes = Dict([k=>log2(v) for (k,v) in size_dict])
_timespacereadwrite_complexity(ei, log2_sizes)
end
function __timespacereadwrite_complexity(ei::EinCode, size_dict)
log2_sizes = Dict([k=>log2(v) for (k,v) in size_dict])
_timespacereadwrite_complexity(getixsv(ei), getiyv(ei), log2_sizes)
end
function _timespacereadwrite_complexity(ei::NestedEinsum, log2_sizes::Dict{L,VT}) where {L,VT}
isleaf(ei) && return (VT(-Inf), VT(-Inf), VT(-Inf))
tcs = VT[]
scs = VT[]
rws = VT[]
for arg in ei.args
tc, sc, rw = _timespacereadwrite_complexity(arg, log2_sizes)
push!(tcs, tc)
push!(scs, sc)
push!(rws, rw)
end
tc2, sc2, rw2 = _timespacereadwrite_complexity(getixsv(ei.eins), getiyv(ei.eins), log2_sizes)
tc = log2sumexp2([tcs..., tc2])
sc = max(reduce(max, scs), sc2)
rw = log2sumexp2([rws..., rw2])
return tc, sc, rw
end
function _timespacereadwrite_complexity(ixs::AbstractVector, iy::AbstractVector{T}, log2_sizes::Dict{L,VT}) where {T, L, VT}
loop_inds = get_loop_inds(ixs, iy)
tc = isempty(loop_inds) ? zero(VT) : sum(l->log2_sizes[l], loop_inds)
sc = isempty(iy) ? zero(VT) : sum(l->log2_sizes[l], iy)
rw = log2sumexp2([[isempty(ix) ? zero(VT) : sum(l->log2_sizes[l], ix) for ix in ixs]..., sc])
return tc, sc, rw
end
function get_loop_inds(ixs::AbstractVector, iy::AbstractVector{LT}) where {LT}
# remove redundant legs
counts = Dict{LT,Int}()
for ix in ixs
for l in ix
if haskey(counts, l)
counts[l] += 1
else
counts[l] = 1
end
end
end
for l in iy
if haskey(counts, l)
counts[l] += 1
else
counts[l] = 1
end
end
loop_inds = LT[]
for ix in ixs
for l in ix
c = count(==(l), ix)
if counts[l] > c && l ∉ loop_inds
push!(loop_inds, l)
end
end
end
return loop_inds
end
"""
flop(eincode, size_dict) -> Int
Returns the number of iterations, which differs from the true number of floating point operations (FLOPs) by a factor of 2.
"""
function flop(ei::EinCode, size_dict::Dict{LT,VT}) where {LT,VT}
loop_inds = uniquelabels(ei)
return isempty(loop_inds) ? zero(VT) : prod(l->size_dict[l], loop_inds)
end
function flop(ei::NestedEinsum, size_dict::Dict{L,VT}) where {L,VT}
isleaf(ei) && return zero(VT)
return sum(ei.args) do arg
flop(arg, size_dict)
end + flop(ei.eins, size_dict)
end
############### Sliced methods ##################
function __timespacereadwrite_complexity(code::SlicedEinsum, size_dict)
size_dict_sliced = copy(size_dict)
for l in code.slicing
size_dict_sliced[l] = 1
end
tc, sc, rw = __timespacereadwrite_complexity(code.eins, size_dict_sliced)
sliceoverhead = sum(log2.(getindex.(Ref(size_dict), code.slicing)))
tc + sliceoverhead, sc, rw+sliceoverhead
end
function flop(code::SlicedEinsum, size_dict)
size_dict_sliced = copy(size_dict)
for l in code.slicing
size_dict_sliced[l] = 1
end
fl = flop(code.eins, size_dict_sliced)
fl * prod(getindex.(Ref(size_dict), code.slicing))
end
uniformsize(code::AbstractEinsum, size) = Dict([l=>size for l in uniquelabels(code)])
"""
label_elimination_order(code) -> Vector
Returns a vector of labels sorted by the order they are eliminated in the contraction tree.
The contraction tree is specified by `code`, which e.g. can be a `NestedEinsum` instance.
"""
label_elimination_order(code::NestedEinsum) = label_elimination_order!(code, labeltype(code)[])
function label_elimination_order!(code, eliminated_vertices)
isleaf(code) && return eliminated_vertices
for arg in code.args
label_elimination_order!(arg, eliminated_vertices)
end
append!(eliminated_vertices, setdiff(vcat(getixsv(code.eins)...), getiyv(code.eins)))
return eliminated_vertices
end
label_elimination_order(code::SlicedEinsum) = label_elimination_order(code.eins)
# to replace timespacereadwrite_complexity
struct ContractionComplexity
tc::Float64
sc::Float64
rwc::Float64
end
function Base.show(io::IO, cc::ContractionComplexity)
print(io, "Time complexity: 2^$(cc.tc)
Space complexity: 2^$(cc.sc)
Read-write complexity: 2^$(cc.rwc)")
end
Base.iterate(cc::ContractionComplexity) = Base.iterate((cc.tc, cc.sc, cc.rwc))
Base.iterate(cc::ContractionComplexity, state) = Base.iterate((cc.tc, cc.sc, cc.rwc), state)
"""
contraction_complexity(eincode, size_dict) -> ContractionComplexity
Returns the time, space and read-write complexity of the einsum contraction.
The returned object contains 3 fields:
* time complexity `tc` defined as `log2(number of element-wise multiplications)`.
* space complexity `sc` defined as `log2(size of the maximum intermediate tensor)`.
* read-write complexity `rwc` defined as `log2(the number of read-write operations)`.
"""
contraction_complexity(code::AbstractEinsum, size_dict) = ContractionComplexity(__timespacereadwrite_complexity(code, size_dict)...)
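# Example (illustrative sketch): for "ij, jk -> ik" with all bond dimensions 2,
# the loop indices are i, j, k, so tc = 3.0; the output has 4 elements, so
# sc = 2.0; and rwc = log2(4 + 4 + 4) ≈ 3.58.
#
#     code = EinCode([['i', 'j'], ['j', 'k']], ['i', 'k'])
#     contraction_complexity(code, uniformsize(code, 2))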
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 12138 | struct ContractionTree
left
right
end
struct LegInfo{ET}
# We use the numbers 0, 1, 2 to denote the output tensor, the first input tensor, and the second input tensor; e.g. `l01` denotes the set of labels that appear in both the output tensor and the first input tensor.
l1::Vector{ET}
l2::Vector{ET}
l12::Vector{ET}
l01::Vector{ET}
l02::Vector{ET}
l012::Vector{ET}
end
"""
tree_greedy(incidence_list, log2_sizes; α = 0.0, temperature = 0.0, nrepeat=10)
Compute a contraction order by greedy search, together with its time and space complexities. The rows of the `incidence_list` are vertices and the columns are edges.
`log2_sizes` are defined on edges.
`α` is the parameter for the loss function, for pairwise interaction, L = size(out) - α * (size(in1) + size(in2))
`temperature` is the parameter for sampling, if it is zero, the minimum loss is selected; for non-zero, the loss is selected by the Boltzmann distribution, given by p ~ exp(-loss/temperature).
```julia
julia> code = ein"(abc,cde),(ce,sf,j),ak->ael"
aec, ec, ak -> ael
├─ ce, sf, j -> ec
│ ├─ sf
│ ├─ j
│ └─ ce
├─ ak
└─ abc, cde -> aec
├─ cde
└─ abc
julia> optimize_greedy(code, Dict([c=>2 for c in "abcdefjkls"]))
ae, ak -> ea
├─ ak
└─ aec, ec -> ae
├─ ce, -> ce
│ ├─ sf, j ->
│ │ ├─ j
│ │ └─ sf
│ └─ ce
└─ abc, cde -> aec
├─ cde
└─ abc
```
"""
function tree_greedy(incidence_list::IncidenceList{VT,ET}, log2_edge_sizes; α::TA = 0.0, temperature::TT = 0.0, nrepeat=10) where {VT,ET,TA,TT}
@assert nrepeat >= 1
results = Vector{Tuple{ContractionTree, Vector{Float64}, Vector{Float64}}}(undef, nrepeat)
@threads for i = 1:nrepeat
results[i] = _tree_greedy(incidence_list, log2_edge_sizes; α = α, temperature = temperature)
end
best_sc = minimum([maximum(r[3]) for r in results])
possible_ids = findall(x -> maximum(x[3]) == best_sc, results)
possible_results = results[possible_ids]
best_tree, best_tcs, best_scs = possible_results[argmin([log2sumexp2(r[2]) for r in possible_results])]
return best_tree, best_tcs, best_scs
end
function _tree_greedy(incidence_list::IncidenceList{VT,ET}, log2_edge_sizes; α::TA = 0.0, temperature::TT = 0.0) where {VT,ET,TA,TT}
incidence_list = copy(incidence_list)
n = nv(incidence_list)
if n == 0
return nothing
elseif n == 1
return collect(vertices(incidence_list))[1]
end
log2_tcs = Float64[] # time complexity
log2_scs = Float64[]
tree = Dict{VT,Any}([v=>v for v in vertices(incidence_list)])
cost_values = evaluate_costs(α, incidence_list, log2_edge_sizes)
while true
if length(cost_values) == 0
vpool = collect(vertices(incidence_list))
pair = minmax(vpool[1], vpool[2]) # to prevent empty intersect
else
pair = find_best_cost(temperature, cost_values)
end
log2_tc_step, sc, code = contract_pair!(incidence_list, pair..., log2_edge_sizes)
push!(log2_tcs, log2_tc_step)
push!(log2_scs, space_complexity(incidence_list, log2_edge_sizes))
if nv(incidence_list) > 1
tree[pair[1]] = ContractionTree(tree[pair[1]], tree[pair[2]])
else
return ContractionTree(tree[pair[1]], tree[pair[2]]), log2_tcs, log2_scs
end
update_costs!(cost_values, pair..., α, incidence_list, log2_edge_sizes)
end
end
function contract_pair!(incidence_list, vi, vj, log2_edge_sizes)
log2dim(legs) = isempty(legs) ? 0 : sum(l->log2_edge_sizes[l], legs) # the isempty guard is needed because on Julia 1.5 `sum` does not accept the `init` keyword
# compute time complexity and output tensor
legsets = analyze_contraction(incidence_list, vi, vj)
D12,D01,D02,D012 = log2dim.(getfield.(Ref(legsets),3:6))
tc = D12+D01+D02+D012 # dangling legs D1 and D2 do not contribute
# einsum code
eout = legsets.l01 ∪ legsets.l02 ∪ legsets.l012
code = (edges(incidence_list, vi), edges(incidence_list, vj)) => eout
sc = log2dim(eout)
# change incidence_list
delete_vertex!(incidence_list, vj)
change_edges!(incidence_list, vi, eout)
for e in eout
replace_vertex!(incidence_list, e, vj=>vi)
end
remove_edges!(incidence_list, legsets.l1 ∪ legsets.l2 ∪ legsets.l12)
return tc, sc, code
end
function evaluate_costs(α::TA, incidence_list::IncidenceList{VT,ET}, log2_edge_sizes) where {VT,ET,TA}
# initialize cost values
cost_values = Dict{Tuple{VT,VT},Float64}()
for vi = vertices(incidence_list)
for vj in neighbors(incidence_list, vi)
if vj > vi
cost_values[(vi,vj)] = greedy_loss(α, incidence_list, log2_edge_sizes, vi, vj)
end
end
end
return cost_values
end
function update_costs!(cost_values, va, vb, α::TA, incidence_list::IncidenceList{VT,ET}, log2_edge_sizes) where {VT,ET,TA}
for vj in neighbors(incidence_list, va)
vx, vy = minmax(vj, va)
cost_values[(vx,vy)] = greedy_loss(α, incidence_list, log2_edge_sizes, vx, vy)
end
for k in keys(cost_values)
if vb ∈ k
delete!(cost_values, k)
end
end
end
function find_best_cost(temperature::TT, cost_values::Dict{PT}) where {PT,TT}
length(cost_values) < 1 && error("cost value information missing")
if iszero(temperature)
minval = minimum(Base.values(cost_values))
pairs = PT[]
for (k, v) in cost_values
if v == minval
push!(pairs, k)
end
end
return rand(pairs)
else
return sample_best_cost(cost_values, temperature)
end
end
function sample_best_cost(cost_values::Dict{PT}, t::T) where {PT, T}
length(cost_values) < 1 && error("cost value information missing")
vals = [v for v in values(cost_values)]
prob = exp.( - vals ./ t)
vc = [k for (k, v) in cost_values]
sample(vc, Weights(prob))
end
function analyze_contraction(incidence_list::IncidenceList{VT,ET}, vi::VT, vj::VT) where {VT,ET}
ei = edges(incidence_list, vi)
ej = edges(incidence_list, vj)
leg012,leg12,leg1,leg2,leg01,leg02 = ET[], ET[], ET[], ET[], ET[], ET[]
# external legs
for leg in ei ∪ ej
isext = leg ∈ incidence_list.openedges || !all(x->x==vi || x==vj, vertices(incidence_list, leg))
if isext
if leg ∈ ei
if leg ∈ ej
push!(leg012, leg)
else
push!(leg01, leg)
end
else
push!(leg02, leg)
end
else
if leg ∈ ei
if leg ∈ ej
push!(leg12, leg)
else
push!(leg1, leg)
end
else
push!(leg2, leg)
end
end
end
return LegInfo(leg1, leg2, leg12, leg01, leg02, leg012)
end
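# Example (illustrative sketch): contracting tensors 1 (legs a, b) and 2
# (legs b, c) while tensor 3 holds legs (a, c) classifies b as internal (l12)
# and a, c as external legs kept in the pairwise output (l01, l02).
#
#     il = IncidenceList(Dict(1 => ['a', 'b'], 2 => ['b', 'c'], 3 => ['a', 'c']))
#     legs = analyze_contraction(il, 1, 2)
#     legs.l12              # ['b']
#     legs.l01, legs.l02    # (['a'], ['c'])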
function greedy_loss(α, incidence_list, log2_edge_sizes, vi, vj)
log2dim(legs) = isempty(legs) ? 0 : sum(l->log2_edge_sizes[l], legs) # the isempty guard is needed because on Julia 1.5 `sum` does not accept the `init` keyword
legs = analyze_contraction(incidence_list, vi, vj)
D1,D2,D12,D01,D02,D012 = log2dim.(getfield.(Ref(legs), 1:6))
loss = exp2(D01+D02+D012) - α * (exp2(D01+D12+D012) + exp2(D02+D12+D012)) # out - in
return loss
end
function space_complexity(incidence_list, log2_sizes)
sc = 0.0
for v in vertices(incidence_list)
for e in edges(incidence_list, v)
sc += log2_sizes[e]
end
end
return sc
end
function contract_tree!(incidence_list::IncidenceList, tree::ContractionTree, log2_edge_sizes, tcs, scs)
vi = tree.left isa ContractionTree ? contract_tree!(incidence_list, tree.left, log2_edge_sizes, tcs, scs) : tree.left
vj = tree.right isa ContractionTree ? contract_tree!(incidence_list, tree.right, log2_edge_sizes, tcs, scs) : tree.right
tc, sc, code = contract_pair!(incidence_list, vi, vj, log2_edge_sizes)
push!(tcs, tc)
push!(scs, sc)
return vi
end
#################### parse to code ####################
function parse_eincode!(::IncidenceList{IT,LT}, tree, vertices_order, level=0) where {IT,LT}
ti = findfirst(==(tree), vertices_order)
ti, NestedEinsum{LT}(ti)
end
function parse_eincode!(incidence_list::IncidenceList{IT,LT}, tree::ContractionTree, vertices_order, level=0) where {IT,LT}
ti, codei = parse_eincode!(incidence_list, tree.left, vertices_order, level+1)
tj, codej = parse_eincode!(incidence_list, tree.right, vertices_order, level+1)
dummy = Dict([e=>0 for e in keys(incidence_list.e2v)])
_, _, code = contract_pair!(incidence_list, vertices_order[ti], vertices_order[tj], dummy)
ti, NestedEinsum([codei, codej], EinCode([code.first...], level==0 ? incidence_list.openedges : code.second))
end
function parse_eincode(incidence_list::IncidenceList, tree::ContractionTree; vertices = collect(keys(incidence_list.v2e)))
parse_eincode!(copy(incidence_list), tree, vertices)[2]
end
function parse_nested(code::EinCode{LT}, tree::ContractionTree) where LT
ixs, iy = getixsv(code), getiyv(code)
incidence_list = IncidenceList(Dict([i=>ixs[i] for i=1:length(ixs)]); openedges=iy)
parse_eincode!(incidence_list, tree, 1:length(ixs))[2]
end
function parse_tree(ein, vertices)
if isleaf(ein)
vertices[ein.tensorindex]
else
if length(ein.args) != 2
error("This eincode is not a binary tree.")
end
left, right = parse_tree.(ein.args, Ref(vertices))
ContractionTree(left, right)
end
end
"""
optimize_greedy(eincode, size_dict; α = 0.0, temperature = 0.0, nrepeat=10)
Greedily optimize the contraction order and return a `NestedEinsum` object.
Check the docstring of `tree_greedy` for a detailed explanation of the other input arguments.
"""
function optimize_greedy(code::EinCode{L}, size_dict::Dict; α::TA = 0.0, temperature::TT = 0.0, nrepeat=10) where {L,TA,TT}
optimize_greedy(getixsv(code), getiyv(code), size_dict; α = α, temperature = temperature, nrepeat=nrepeat)
end
function optimize_greedy(ixs::AbstractVector{<:AbstractVector}, iy::AbstractVector, size_dict::Dict{L,TI}; α::TA = 0.0, temperature::TT = 0.0, nrepeat=10) where {L, TI, TA, TT}
if length(ixs) <= 2
return NestedEinsum(NestedEinsum{L}.(1:length(ixs)), EinCode(ixs, iy))
end
log2_edge_sizes = Dict{L,Float64}()
for (k, v) in size_dict
log2_edge_sizes[k] = log2(v)
end
incidence_list = IncidenceList(Dict([i=>ixs[i] for i=1:length(ixs)]); openedges=iy)
tree, _, _ = tree_greedy(incidence_list, log2_edge_sizes; α = α, temperature = temperature, nrepeat=nrepeat)
parse_eincode!(incidence_list, tree, 1:length(ixs))[2]
end
function optimize_greedy(code::NestedEinsum, size_dict; α::TA = 0.0, temperature::TT = 0.0, nrepeat=10) where {TT, TA}
isleaf(code) && return code
args = optimize_greedy.(code.args, Ref(size_dict); α = α, temperature = temperature, nrepeat=nrepeat)
if length(code.args) > 2
# generate coarse grained hypergraph.
nested = optimize_greedy(code.eins, size_dict; α = α, temperature = temperature, nrepeat=nrepeat)
replace_args(nested, args)
else
NestedEinsum(args, code.eins)
end
end
function replace_args(nested::NestedEinsum{LT}, trueargs) where LT
isleaf(nested) && return trueargs[nested.tensorindex]
NestedEinsum(replace_args.(nested.args, Ref(trueargs)), nested.eins)
end
"""
GreedyMethod{MT}
GreedyMethod(; α = 0.0, temperature = 0.0, nrepeat=10)
The fast but poor greedy optimizer. Input arguments are
* `α` is the parameter for the loss function, for pairwise interaction, L = size(out) - α * (size(in1) + size(in2))
* `temperature` is the parameter for sampling, if it is zero, the minimum loss is selected; for non-zero, the loss is selected by the Boltzmann distribution, given by p ~ exp(-loss/temperature).
* `nrepeat` is the number of repetitions; the best contraction order among them is returned.
"""
Base.@kwdef struct GreedyMethod{TA, TT} <: CodeOptimizer
α::TA = 0.0
temperature::TT = 0.0
nrepeat::Int = 10
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 2026 | struct IncidenceList{VT,ET}
v2e::Dict{VT,Vector{ET}}
e2v::Dict{ET,Vector{VT}}
openedges::Vector{ET}
end
function IncidenceList(v2e::Dict{VT,Vector{ET}}; openedges=ET[]) where {VT,ET}
e2v = Dict{ET,Vector{VT}}()
for (v, es) in v2e
for e in es
if haskey(e2v, e)
push!(e2v[e], v)
else
e2v[e] = [v]
end
end
end
IncidenceList(v2e, e2v, openedges)
end
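# Example (illustrative sketch): the vertex-to-edge map below describes the
# network "ij, jk -> ik"; the edge-to-vertex map is derived automatically.
#
#     il = IncidenceList(Dict(1 => ['i', 'j'], 2 => ['j', 'k']); openedges=['i', 'k'])
#     neighbors(il, 1)   # [2], connected through the shared edge 'j'
#     nv(il), ne(il)     # (2, 3)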
Base.copy(il::IncidenceList) = IncidenceList(deepcopy(il.v2e), deepcopy(il.e2v), copy(il.openedges))
function neighbors(il::IncidenceList{VT}, v) where VT
res = VT[]
for e in il.v2e[v]
for vj in il.e2v[e]
v != vj && push!(res, vj)
end
end
return unique!(res)
end
vertices(il::IncidenceList) = keys(il.v2e)
vertices(il::IncidenceList, e) = il.e2v[e]
vertex_degree(il::IncidenceList, v) = length(il.v2e[v])
edge_degree(il::IncidenceList, e) = length(il.e2v[e])
edges(il::IncidenceList, v) = il.v2e[v]
nv(il::IncidenceList) = length(il.v2e)
ne(il::IncidenceList) = length(il.e2v)
function delete_vertex!(incidence_list::IncidenceList{VT,ET}, vj::VT) where {VT,ET}
edges = pop!(incidence_list.v2e, vj)
for e in edges
vs = vertices(incidence_list, e)
res = findfirst(==(vj), vs)
if res !== nothing
deleteat!(vs, res)
end
end
return incidence_list
end
function change_edges!(incidence_list, vi, es)
incidence_list.v2e[vi] = es
return incidence_list
end
function remove_edges!(incidence_list, es)
for e in es
delete!(incidence_list.e2v, e)
end
return incidence_list
end
function replace_vertex!(incidence_list, e, pair)
el = incidence_list.e2v[e]
if pair.first ∈ el
if pair.second ∈ el
deleteat!(el, findfirst(==(pair.first), el))
else
replace!(el, pair)
end
else
if pair.second ∉ el
push!(el, pair.second)
end
end
return incidence_list
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 2902 | """
optimize_code(eincode, size_dict, optimizer = GreedyMethod(), simplifier=nothing, permute=true) -> optimized_eincode
Optimize the einsum contraction code and reduce the time/space complexity of tensor network contraction.
Returns a `NestedEinsum` instance. Input arguments are
* `eincode` is an einsum contraction code instance, one of `DynamicEinCode`, `StaticEinCode` or `NestedEinsum`.
* `size_dict` is a dictionary of "edge label => edge size" that contains the size information; one can use `uniformsize(eincode, 2)` to create a uniform size.
* `optimizer` is a `CodeOptimizer` instance, should be one of `GreedyMethod`, `ExactTreewidth`, `KaHyParBipartite`, `SABipartite` or `TreeSA`. Check their docstrings for details.
* `simplifier` is one of `MergeVectors` or `MergeGreedy`.
* `permute`: if true, optimize the permutation of the tensor indices after the contraction order is found.
### Examples
```julia
julia> using OMEinsum
julia> code = ein"ij, jk, kl, il->"
ij, jk, kl, il ->
```
```
julia> optimize_code(code, uniformsize(code, 2), TreeSA())
SlicedEinsum{Char, NestedEinsum{DynamicEinCode{Char}}}(Char[], ki, ki ->
├─ jk, ij -> ki
│ ├─ jk
│ └─ ij
└─ kl, il -> ki
├─ kl
└─ il
)
```
"""
function optimize_code(code::Union{EinCode, NestedEinsum}, size_dict::Dict, optimizer::CodeOptimizer, simplifier=nothing, permute::Bool=true)
if simplifier === nothing
optcode = _optimize_code(code, size_dict, optimizer)
else
simpl, code = simplify_code(code, size_dict, simplifier)
optcode0 = _optimize_code(code, size_dict, optimizer)
optcode = embed_simplifier(optcode0, simpl)
end
if permute
optcode = optimize_permute(optcode, 0)
end
return optcode
end
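# Example (illustrative sketch): combining an optimizer with a simplifier.
# The network below contains a vector tensor ['k'] that `MergeVectors` absorbs
# into a neighboring tensor before the greedy optimizer runs.
#
#     code = EinCode([['i', 'j'], ['j', 'k'], ['k']], ['i'])
#     optimize_code(code, uniformsize(code, 2), GreedyMethod(), MergeVectors())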
simplify_code(code::Union{EinCode, NestedEinsum}, size_dict, ::MergeVectors) = merge_vectors(code)
simplify_code(code::Union{EinCode, NestedEinsum}, size_dict, method::MergeGreedy) = merge_greedy(code, size_dict; threshhold=method.threshhold)
function _optimize_code(code, size_dict, optimizer::KaHyParBipartite)
recursive_bipartite_optimize(optimizer, code, size_dict)
end
function _optimize_code(code, size_dict, optimizer::GreedyMethod)
optimize_greedy(code, size_dict; α = optimizer.α, temperature = optimizer.temperature, nrepeat=optimizer.nrepeat)
end
function _optimize_code(code, size_dict, optimizer::ExactTreewidth)
optimize_exact_treewidth(optimizer, code, size_dict)
end
function _optimize_code(code, size_dict, optimizer::SABipartite)
recursive_bipartite_optimize(optimizer, code, size_dict)
end
function _optimize_code(code, size_dict, optimizer::TreeSA)
optimize_tree(code, size_dict; sc_target=optimizer.sc_target, βs=optimizer.βs,
ntrials=optimizer.ntrials, niters=optimizer.niters, nslices=optimizer.nslices,
sc_weight=optimizer.sc_weight, rw_weight=optimizer.rw_weight, initializer=optimizer.initializer,
greedy_method=optimizer.greedy_config, fixed_slices=optimizer.fixed_slices)
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 1973 |
function writejson(filename::AbstractString, ne::Union{NestedEinsum, SlicedEinsum})
dict = _todict(ne)
open(filename, "w") do f
JSON.print(f, dict, 0)
end
end
function _todict(ne::SlicedEinsum)
dict = _todict(ne.eins)
dict["slices"] = ne.slicing
return dict
end
function _todict(ne::NestedEinsum)
LT = labeltype(ne)
dict = Dict{String,Any}("label-type"=>string(LT), "inputs"=>getixsv(ne), "output"=>getiyv(ne))
dict["tree"] = todict(ne)
return dict
end
function readjson(filename::AbstractString)
dict = JSON.parsefile(filename)
return _fromdict(dict)
end
function _fromdict(dict)
lt = dict["label-type"]
LT = if lt == "Char"
Char
elseif lt ∈ ("Int64", "Int", "Int32")
Int
else
error("label type `$lt` not known.")
end
ne = fromdict(LT, dict["tree"])
if haskey(dict, "slices")
return SlicedEinsum(LT[_convert(LT, l) for l in dict["slices"]], ne)
else
return ne
end
end
function todict(ne::NestedEinsum)
dict = Dict{String,Any}()
if isleaf(ne)
dict["isleaf"] = true
dict["tensorindex"] = ne.tensorindex
return dict
end
dict["args"] = collect(todict.(ne.args))
dict["eins"] = einstodict(ne.eins)
dict["isleaf"] = false
return dict
end
function einstodict(eins::EinCode)
ixs = getixsv(eins)
iy = getiyv(eins)
return Dict("ixs"=>ixs, "iy"=>iy)
end
function fromdict(::Type{LT}, dict::Dict) where LT
if dict["isleaf"]
return NestedEinsum{LT}(dict["tensorindex"])
end
eins = einsfromdict(LT, dict["eins"])
return NestedEinsum(fromdict.(LT, dict["args"]), eins)
end
function einsfromdict(::Type{LT}, dict::Dict) where LT
return EinCode([collect(LT, _convert.(LT, ix)) for ix in dict["ixs"]], collect(LT, _convert.(LT, dict["iy"])))
end
_convert(::Type{LT}, x) where LT = convert(LT, x)
_convert(::Type{Char}, x::String) = (@assert length(x)==1; x[1])
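# Example (illustrative sketch): round-tripping a contraction order through
# JSON; `writejson`/`readjson` are deliberately unexported, and the file name
# below is an arbitrary assumption.
#
#     code = EinCode([['i', 'j'], ['j', 'k']], ['i', 'k'])
#     nested = optimize_greedy(code, Dict('i' => 2, 'j' => 2, 'k' => 2))
#     writejson("order.json", nested)
#     readjson("order.json") == nested   # true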
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 10113 | """
KaHyParBipartite{RT,IT,GM}
KaHyParBipartite(; sc_target, imbalances=collect(0.0:0.005:0.8),
max_group_size=40, sub_optimizer=GreedyMethod())
Optimize the einsum code contraction order using the KaHyPar + Greedy approach.
This program first recursively cuts the tensors into several groups using KaHyPar,
with the maximum group size specified by `max_group_size` and the maximum space complexity specified by `sc_target`,
then finds the contraction order inside each group with the greedy search algorithm. Other arguments are
* `sc_target` is the target space complexity, defined as `log2(number of elements in the largest tensor)`,
* `imbalances` is a KaHyPar parameter that controls the group sizes in hierarchical bipartition,
* `max_group_size` is the maximum group size for which the greedy search is used,
* `sub_optimizer` is the optimizer used for the bipartited sub-graphs.
### References
* [Hyper-optimized tensor network contraction](https://arxiv.org/abs/2002.01935)
* [Simulating the Sycamore quantum supremacy circuits](https://arxiv.org/abs/2103.03074)
"""
Base.@kwdef struct KaHyParBipartite{RT,IT,SO} <: CodeOptimizer
sc_target::RT
imbalances::IT = 0.0:0.005:0.8
max_group_size::Int = 40
sub_optimizer::SO = GreedyMethod()
end
function induced_subhypergraph(s::SparseMatrixCSC, group)
s0 = s[group,:]
nvs = vec(sum(s0, dims=1))
remaining_edges = findall(!iszero, nvs)
s0[:,remaining_edges], remaining_edges
end
function convert2int(sizes::AbstractVector)
round.(Int, sizes .* 100)
end
function bipartite_sc(bipartiter, adj::SparseMatrixCSC, vertices, log2_sizes)
error("""Guess you are trying to use the `KaHyParBipartite` optimizer.
Then you need to add `using KaHyPar` first!""")
end
# the space complexity (the number of external degrees of freedom) if we contract this group
function group_sc(adj, group, log2_sizes)
degree_in = sum(adj[group,:], dims=1)
degree_all = sum(adj, dims=1)
sum(i->(degree_in[i]!=0 && degree_in[i]!=degree_all[i] ? Float64(log2_sizes[i]) : 0.0), 1:size(adj,2))
end
function bipartition_recursive(bipartiter, adj::SparseMatrixCSC, vertices::AbstractVector{T}, log2_sizes) where T
if length(vertices) > bipartiter.max_group_size
parts = bipartite_sc(bipartiter, adj, vertices, log2_sizes)
groups = Vector{T}[]
for part in parts
for component in _connected_components(adj, part)
push!(groups, component)
end
end
newparts = [bipartition_recursive(bipartiter, adj, groups[i], log2_sizes) for i=1:length(groups)]
if length(groups) > 2
# the number of groups is still small here (just above 2), so the simple greedy method is sufficient and more complex methods are unnecessary
tree = coarse_grained_optimize(adj, groups, log2_sizes, GreedyMethod())
return map_tree_to_parts(tree, newparts)
else
return newparts
end
else
return [vertices]
end
end
function _connected_components(adj, part::AbstractVector{T}) where T
A = adj[part,:]
A = A * A' # connectivity matrix
n = length(part)
visit_mask = zeros(Bool, n)
groups = Vector{T}[]
while !all(visit_mask)
newset = Int[]
push_connected!(newset, visit_mask, A, findfirst(==(false), visit_mask))
push!(groups, getindex.(Ref(part), newset))
end
return groups
end
function push_connected!(set, visit_mask, adj, i)
visit_mask[i] = true
push!(set, i)
for v = 1:size(adj, 2)
if !visit_mask[v] && !iszero(adj[i,v])
push_connected!(set, visit_mask, adj, v)
end
end
end
# parts are vectors of Ts
function coarse_grained_optimize(adj, parts, log2_sizes, sub_optimizer)
incidence_list = get_coarse_grained_graph(adj, parts)
log2_edge_sizes = Dict([i=>log2_sizes[i] for i=1:length(log2_sizes)])
tree, _, _ = tree_greedy(incidence_list, log2_edge_sizes; α = sub_optimizer.α, temperature = sub_optimizer.temperature, nrepeat=sub_optimizer.nrepeat)
return tree
end
function get_coarse_grained_graph(adj, parts)
ADJ = vcat([sum(adj[part,:], dims=1) for part in parts]...)
degree_in = sum(ADJ, dims=1)
degree_all = sum(adj, dims=1)
openedges = filter(i->degree_in[i]!=0 && degree_in[i]!=degree_all[i], 1:size(adj, 2))
v2e = Dict{Int,Vector{Int}}()
for v=1:size(ADJ, 1)
v2e[v] = findall(!iszero, view(ADJ,v,:))
end
incidence_list = IncidenceList(v2e; openedges=openedges)
return incidence_list
end
function map_tree_to_parts(tree, parts)
if tree isa ContractionTree
[map_tree_to_parts(tree.left, parts), map_tree_to_parts(tree.right, parts)]
else
parts[tree]
end
end
# KaHyPar
function adjacency_matrix(ixs::AbstractVector)
rows = Int[]
cols = Int[]
edges = unique!([Iterators.flatten(ixs)...])
for (i,ix) in enumerate(ixs)
push!(rows, map(x->i, ix)...)
push!(cols, map(x->findfirst(==(x), edges), ix)...)
end
return sparse(rows, cols, ones(Int, length(rows)), length(ixs), length(edges)), edges
end
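# Example (illustrative sketch): the tensor-by-label incidence structure of
# "ij, jk -> ik"; rows index the tensors and columns index the unique labels.
#
#     adj, edges = adjacency_matrix([['i', 'j'], ['j', 'k']])
#     size(adj)   # (2, 3)
#     edges       # ['i', 'j', 'k']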
# legacy interface
"""
optimize_kahypar(code, size_dict; sc_target, max_group_size=40, imbalances=0.0:0.01:0.2, sub_optimizer=GreedyMethod())
Optimize the einsum `code` contraction order using the KaHyPar + Greedy approach. `size_dict` is a dictionary that specifies leg dimensions.
Check the docstring of `KaHyParBipartite` for detailed explaination of other input arguments.
"""
function optimize_kahypar(code::EinCode, size_dict; sc_target, max_group_size=40, imbalances=0.0:0.01:0.2, sub_optimizer=GreedyMethod())
bipartiter = KaHyParBipartite(; sc_target=sc_target, max_group_size=max_group_size, imbalances=imbalances, sub_optimizer = sub_optimizer)
recursive_bipartite_optimize(bipartiter, code, size_dict)
end
function recursive_bipartite_optimize(bipartiter, code::EinCode, size_dict)
ixs, iy = getixsv(code), getiyv(code)
ixv = [ixs..., iy]
adj, edges = adjacency_matrix(ixv)
vertices=collect(1:length(ixv))
parts = bipartition_recursive(bipartiter, adj, vertices, [log2(size_dict[e]) for e in edges])
optcode = recursive_construct_nestedeinsum(ixv, empty(iy), parts, size_dict, 0, bipartiter.sub_optimizer)
return pivot_tree(optcode, length(ixs) + 1)
end
maplocs(ne::NestedEinsum{ET}, parts) where ET = isleaf(ne) ? NestedEinsum{ET}(parts[ne.tensorindex]) : NestedEinsum(maplocs.(ne.args, Ref(parts)), ne.eins)
function kahypar_recursive(ne::NestedEinsum; log2_size_dict, sc_target, min_size, imbalances=0.0:0.04:0.8)
if length(ne.args) >= min_size && all(isleaf, ne.args)
bipartite_eincode(adj, ne.args, ne.eins; log2_size_dict=log2_size_dict, sc_target=sc_target, min_size=min_size, imbalances=imbalances)
end
kahypar_recursive(ne.args; log2_size_dict, sc_target=sc_target, min_size=min_size, imbalances=imbalances)
end
recursive_flatten(obj::Tuple) = vcat(recursive_flatten.(obj)...)
recursive_flatten(obj::AbstractVector) = vcat(recursive_flatten.(obj)...)
recursive_flatten(obj) = obj
"""
optimize_kahypar_auto(code, size_dict; max_group_size=40, sub_optimizer = GreedyMethod())
Find the optimal contraction order automatically by determining the `sc_target` with bisection.
It can fail if the tree width of your graph is larger than `100`.
"""
function optimize_kahypar_auto(code::EinCode, size_dict; max_group_size=40, effort=500, sub_optimizer=GreedyMethod())
sc_high = 100
sc_low = 1
order_high = optimize_kahypar(code, size_dict; sc_target=sc_high, max_group_size=max_group_size, imbalances=0.0:0.6/effort*(sc_high-sc_low):0.6, sub_optimizer = sub_optimizer)
_optimize_kahypar_auto(code, size_dict, sc_high, order_high, sc_low, max_group_size, effort, sub_optimizer)
end
function _optimize_kahypar_auto(code::EinCode, size_dict, sc_high, order_high, sc_low, max_group_size, effort, sub_optimizer)
if sc_high <= sc_low + 1
order_high
else
sc_mid = (sc_high + sc_low) ÷ 2
try
order_mid = optimize_kahypar(code, size_dict; sc_target=sc_mid, max_group_size=max_group_size, imbalances=0.0:0.6/effort*(sc_high-sc_low):0.6, sub_optimizer = sub_optimizer)
order_high, sc_high = order_mid, sc_mid
# success: `sc_mid` is achievable, so tighten the upper bound
catch
# failure: `sc_target = sc_mid` is too low, so raise the lower bound
sc_low = sc_mid
end
_optimize_kahypar_auto(code, size_dict, sc_high, order_high, sc_low, max_group_size, effort, sub_optimizer)
end
end
function recursive_construct_nestedeinsum(ixs::AbstractVector{<:AbstractVector}, iy::AbstractVector{L}, parts::AbstractVector, size_dict, level, sub_optimizer) where L
if length(parts) == 2
# code is a nested einsum
code1 = recursive_construct_nestedeinsum(ixs, iy, parts[1], size_dict, level+1, sub_optimizer)
code2 = recursive_construct_nestedeinsum(ixs, iy, parts[2], size_dict, level+1, sub_optimizer)
AB = recursive_flatten(parts[2]) ∪ recursive_flatten(parts[1])
inset12, outset12 = ixs[AB], ixs[setdiff(1:length(ixs), AB)]
iy12 = Iterators.flatten(inset12) ∩ (Iterators.flatten(outset12) ∪ iy)
iy1, iy2 = getiyv(code1.eins), getiyv(code2.eins)
return NestedEinsum([code1, code2], EinCode([iy1, iy2], L[(level==0 ? iy : iy12)...]))
elseif length(parts) == 1
return recursive_construct_nestedeinsum(ixs, iy, parts[1], size_dict, level, sub_optimizer)
else
error("not a bipartition, got size $(length(parts))")
end
end
function recursive_construct_nestedeinsum(ixs::AbstractVector{<:AbstractVector}, iy::AbstractVector{L}, parts::AbstractVector{<:Integer}, size_dict, level, sub_optimizer) where L
if isempty(parts)
error("got empty group!")
end
inset, outset = ixs[parts], ixs[setdiff(1:length(ixs), parts)]
iy1 = level == 0 ? iy : Iterators.flatten(inset) ∩ (Iterators.flatten(outset) ∪ iy)
res = optimize_code(EinCode(inset, iy1), size_dict, sub_optimizer)
if res isa SlicedEinsum
@assert length(res.slicing) == 0
res = res.eins
end
return maplocs(res, parts)
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 9871 | """
SABipartite{RT,BT}
SABipartite(; sc_target=25, ntrials=50, βs=0.1:0.2:15.0, niters=1000,
max_group_size=40, sub_optimizer=GreedyMethod(), initializer=:random)
Optimize the einsum code contraction order using the Simulated Annealing bipartition + Greedy approach.
This program first recursively cuts the tensors into several groups using simulated annealing,
with the maximum group size specified by `max_group_size` and the maximum space complexity specified by `sc_target`,
then finds the contraction order inside each group with the greedy search algorithm. Other arguments are
* `size_dict`, a dictionary that specifies leg dimensions,
* `sc_target` is the target space complexity, defined as `log2(number of elements in the largest tensor)`,
* `max_group_size` is the maximum group size for which the greedy search is used,
* `βs` is a list of inverse temperature `1/T`,
* `niters` is the number of iteration in each temperature,
* `ntrials` is the number of repetition (with different random seeds),
* `sub_optimizer`, the optimizer for the bipartited sub graphs, one can choose `GreedyMethod()` or `TreeSA()`,
* `initializer`, the partition configuration initializer, one can choose `:random` or `:greedy` (slow but better).
### References
* [Hyper-optimized tensor network contraction](https://arxiv.org/abs/2002.01935)
"""
Base.@kwdef struct SABipartite{RT,BT,SO} <: CodeOptimizer
sc_target::RT = 25
ntrials::Int = 50 # number of trials
βs::BT = 0.1:0.2:15.0 # temperatures
niters::Int = 1000 # number of iterations in each temperature
max_group_size::Int = 40
# configure greedy algorithm
sub_optimizer::SO = GreedyMethod()
initializer::Symbol = :random
end
struct PartitionState
config::Vector{Int}
loss::RefValue{Float64}
group_sizes::Vector{Int} # group sizes
group_scs::Vector{Float64} # space complexities
group_degrees::Matrix{Int} # degree of edges
end
function partition_state(adj, group, config, log2_sizes)
group1 = [group[i] for (i,c) in enumerate(config) if c==1]
group2 = [group[i] for (i,c) in enumerate(config) if c==2]
group_sizes = [length(group1), length(group2)] # group size in terms of number of vertices.
group_scs = [group_sc(adj, group1, log2_sizes), group_sc(adj, group2, log2_sizes)] # space complexity of each group.
group_degrees = hcat(sum(adj[group1,:]; dims=1)', sum(adj[group2,:]; dims=1)')
loss = compute_loss(group_scs..., group_sizes...)
return PartitionState(config, Ref(loss), group_sizes, group_scs, group_degrees)
end
function bipartite_sc(bipartiter::SABipartite, adj::SparseMatrixCSC, vertices, log2_sizes)
@assert length(vertices) >= 2
degrees_all = sum(adj, dims=1)
adjt = SparseMatrixCSC(adj')
config = _initialize(bipartiter.initializer, adj, vertices, log2_sizes)
if all(config .== 1) || all(config .== 2)
config[1] = 3 - config[1] # flip the first vertex's assignment to avoid an empty group
end
best = partition_state(adj, vertices, config, log2_sizes) # this is the `state` of current partition.
for _ = 1:bipartiter.ntrials
config = _initialize(bipartiter.initializer,adj, vertices, log2_sizes)
state = partition_state(adj, vertices, config, log2_sizes) # this is the `state` of current partition.
@inbounds for β in bipartiter.βs, iter = 1:bipartiter.niters
idxi = rand(1:length(vertices))
ti = state.config[idxi]
state.group_sizes[ti] <= 1 && continue
sc_ti, sc_tinew = space_complexity_singlestep_update(state, adjt, degrees_all, log2_sizes, vertices, idxi)
newloss = compute_loss(sc_ti, sc_tinew, state.group_sizes[ti]-1, state.group_sizes[3-ti]+1)
sc_ti0, sc_tinew0 = state.group_scs[ti], state.group_scs[3-ti]
accept = if max(sc_ti0, sc_tinew0) <= bipartiter.sc_target
max(sc_ti, sc_tinew) <= bipartiter.sc_target && (rand() < exp2(β*(state.loss[] - newloss)))
else
rand() < exp2(-β*(max(sc_ti, sc_tinew) - max(sc_ti0, sc_tinew0)))
end
accept && update_state!(state, adjt, vertices, idxi, sc_ti, sc_tinew, newloss)
end
(state.group_sizes[1]==0 || state.group_sizes[2] == 0) && continue
tc, sc1, sc2 = timespace_complexity_singlestep(state.config, adj, vertices, log2_sizes)
@assert state.group_scs ≈ [sc1, sc2] # sanity check
if maximum(state.group_scs) <= max(bipartiter.sc_target, maximum(best.group_scs)) && (maximum(best.group_scs) >= bipartiter.sc_target || state.loss[] < best.loss[])
best = state
end
end
best_tc, = timespace_complexity_singlestep(best.config, adj, vertices, log2_sizes)
@debug "best loss = $(round(best.loss[]; digits=3)) space complexities = $(best.group_scs) time complexity = $(best_tc) groups_sizes = $(best.group_sizes)"
if maximum(best.group_scs) > bipartiter.sc_target
@warn "target space complexity $(bipartiter.sc_target) not found, got: $(maximum(best.group_scs)), with time complexity $best_tc."
end
return vertices[findall(==(1), best.config)], vertices[findall(==(2), best.config)]
end
function timespace_complexity_singlestep(config, adj, group, log2_sizes)
g1 = group[findall(==(1), config)]
g2 = group[findall(==(2), config)]
d1 = sum(adj[g1,:], dims=1)
d2 = sum(adj[g2,:], dims=1)
dall = sum(adj, dims=1)
sc1 = sum(i->(d1[i]!=0 && d1[i]!=dall[i] ? Float64(log2_sizes[i]) : 0.0), 1:size(adj,2))
sc2 = sum(i->(d2[i]!=0 && d2[i]!=dall[i] ? Float64(log2_sizes[i]) : 0.0), 1:size(adj,2))
tc = sum(i->((d2[i]!=0 || d1[i]!=0) && (d2[i]!=dall[i] && d1[i]!=dall[i]) ? Float64(log2_sizes[i]) : 0.0), 1:size(adj,2))
return tc, sc1, sc2
end
function space_complexity_singlestep_update(state, adjt, degrees_all, log2_sizes, group, idxi)
@inbounds begin
vertex = group[idxi]
ti = state.config[idxi]
tinew = 3-ti
δsc_ti = δsc(-, view(state.group_degrees, :, ti), adjt, vertex, degrees_all, log2_sizes)
δsc_tinew = δsc(+, view(state.group_degrees, :, tinew), adjt, vertex, degrees_all, log2_sizes)
sc_ti = state.group_scs[ti] + δsc_ti
sc_tinew = state.group_scs[tinew] + δsc_tinew
end
return sc_ti, sc_tinew
end
@inline function δsc(f, group_degrees, adjt, vertex, degrees_all, log2_sizes)
res = 0.0
@inbounds for k in nzrange(adjt, vertex)
i = adjt.rowval[k]
d0 = group_degrees[i]
D = degrees_all[i]
d = f(d0, adjt.nzval[k])
if d0 == D || d0 == 0 # absent
if d != D && d != 0 # absent
res += Float64(log2_sizes[i])
end
else # not absent
if d == D || d == 0 # absent
res -= Float64(log2_sizes[i])
end
end
end
return res
end
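# loss of a bipartition (lower is better): the larger of the two group space complexities,
# multiplied by an imbalance penalty (total group size divided by the smaller group size).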
@inline function compute_loss(sc1, sc2, gs1, gs2)
small = min(gs1, gs2)
max(sc1, sc2) / small * (gs1 + gs2)
end
function update_state!(state, adjt, group, idxi, sc_ti, sc_tinew, newloss)
@inbounds begin
ti = state.config[idxi]
tinew = 3-ti
state.group_scs[tinew] = sc_tinew
state.group_scs[ti] = sc_ti
state.config[idxi] = tinew
state.group_sizes[ti] -= 1
state.group_sizes[tinew] += 1
for i = nzrange(adjt,group[idxi])
state.group_degrees[adjt.rowval[i], ti] -= adjt.nzval[i]
state.group_degrees[adjt.rowval[i], tinew] += adjt.nzval[i]
end
state.loss[] = newloss
end
return state
end
_initialize(method, adj, vertices, log2_sizes) = if method == :random
initialize_random(length(vertices))
elseif method == :greedy
initialize_greedy(adj, vertices, log2_sizes)
else
error("initializer not implemented: `$method`")
end
function initialize_greedy(adj, vertices, log2_sizes)
adjt = SparseMatrixCSC(adj')
indegrees = sum(adj[vertices,:], dims=1)
all = sum(adj, dims=1)
openedges = findall(i->all[i] > indegrees[i] > 0, 1:size(adj, 2))
v2e = Dict{Int,Vector{Int}}()
for v=1:size(adj, 1)
v2e[v] = adjt.rowval[nzrange(adjt, v)]
end
incidence_list = IncidenceList(v2e; openedges=openedges)
log2_edge_sizes = Dict([i=>log2_sizes[i] for i=1:length(log2_sizes)])
# nrepeat=3 because there are overheads
tree, _, _ = tree_greedy(incidence_list, log2_edge_sizes; nrepeat=3)
# build configuration from the tree
res = ones(Int, size(adj, 1))
res[get_vertices!(Int[],tree.right)] .= 2
return res[vertices]
end
initialize_random(n::Int) = [rand() < 0.5 ? 1 : 2 for _ = 1:n]
function get_vertices!(out, tree)
if tree isa Integer
push!(out, tree)
else
get_vertices!(out, tree.left)
get_vertices!(out, tree.right)
end
return out
end
# legacy interface
"""
optimize_sa(code, size_dict; sc_target, max_group_size=40, βs=0.01:0.02:15.0, niters=1000, ntrials=50,
sub_optimizer = GreedyMethod(), initializer=:random)
Optimize the einsum `code` contraction order using the Simulated Annealing bipartition + Greedy approach.
`size_dict` is a dictionary that specifies leg dimensions.
Check the docstring of `SABipartite` for detailed explanation of other input arguments.
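A minimal usage sketch (assuming `code` is an `EinCode` whose labels all have dimension 2):
```
res = optimize_sa(code, uniformsize(code, 2); sc_target=30)
```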
### References
* [Hyper-optimized tensor network contraction](https://arxiv.org/abs/2002.01935)
"""
function optimize_sa(code::EinCode, size_dict; sc_target, max_group_size=40,
βs=0.01:0.02:15.0, niters=1000, ntrials=50, sub_optimizer=GreedyMethod(),
initializer=:random)
bipartiter = SABipartite(; sc_target=sc_target, βs=βs, niters=niters, ntrials=ntrials,
sub_optimizer=sub_optimizer,
max_group_size=max_group_size, initializer=initializer)
recursive_bipartite_optimize(bipartiter, code, size_dict)
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 4191 | struct NetworkSimplifier{LT}
operations::Vector{NestedEinsum{LT}}
end
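# simplification pass: absorb every rank-1 tensor (vector) into a neighboring tensor that
# shares its label; returns the list of merge operations and the simplified code.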
function merge_vectors(code::EinCode{LT}) where LT
ixs = getixsv(code)
mask = trues(length(ixs))
ops = [NestedEinsum{LT}(i) for i=1:length(ixs)]
for i in 1:length(ixs)
if length(ixs[i]) == 1
for j in 1:length(ixs)
if i!=j && mask[j] && ixs[i][1] ∈ ixs[j] # merge i to j
mask[i] = false
ops[j] = NestedEinsum([ops[i], ops[j]],
EinCode([ixs[i], ixs[j]], ixs[j]))
break
end
end
end
end
newcode = EinCode(ixs[mask], getiyv(code))
return NetworkSimplifier(ops[mask]), newcode
end
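# simplification pass: greedily contract tensor pairs while the greedy cost stays below
# `threshhold`, i.e. only contractions that do not increase the total tensor size are applied.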
function merge_greedy(code::EinCode{LT}, size_dict; threshhold=-1e-12) where LT
ixs, iy = getixsv(code), getiyv(code)
log2_edge_sizes = Dict{LT,Float64}()
for (k, v) in size_dict
log2_edge_sizes[k] = log2(v)
end
incidence_list = IncidenceList(Dict([i=>ixs[i] for i=1:length(ixs)]); openedges=iy)
n = nv(incidence_list)
if n == 0
return nothing
elseif n == 1
return collect(vertices(incidence_list))[1]
end
tree = Dict{Int,NestedEinsum}([v=>NestedEinsum{LT}(v) for v in vertices(incidence_list)])
cost_values = evaluate_costs(1.0, incidence_list, log2_edge_sizes)
while true
if length(cost_values) == 0
return _buildsimplifier(tree, incidence_list)
end
v, pair = findmin(cost_values)
if v <= threshhold
_, _, c = contract_pair!(incidence_list, pair..., log2_edge_sizes)
tree[pair[1]] = NestedEinsum([tree[pair[1]], tree[pair[2]]], EinCode([c.first...], c.second))
if nv(incidence_list) <= 1
return _buildsimplifier(tree, incidence_list)
end
update_costs!(cost_values, pair..., 1.0, incidence_list, log2_edge_sizes)
else
return _buildsimplifier(tree, incidence_list)
end
end
end
function _buildsimplifier(tree, incidence_list)
vertices = sort!(collect(keys(incidence_list.v2e)))
ixs = [incidence_list.v2e[v] for v in vertices]
iy = incidence_list.openedges
NetworkSimplifier([tree[v] for v in vertices]), EinCode(ixs, iy)
end
function embed_simplifier(code::NestedEinsum, simplifier)
if isleaf(code)
op = simplifier.operations[code.tensorindex]
return op
else
return NestedEinsum(map(code.args) do arg
embed_simplifier(arg, simplifier)
end, code.eins)
end
end
embed_simplifier(code::SlicedEinsum, simplifier) = SlicedEinsum(code.slicing, embed_simplifier(code.eins, simplifier))
optimize_permute(se::SlicedEinsum, level=0) = SlicedEinsum(se.slicing, se.eins isa EinCode ? se.eins : optimize_permute(se.eins, level))
function optimize_permute(ne::NestedEinsum{LT}, level=0) where LT
if isleaf(ne)
return ne
else
args = NestedEinsum{LT}[optimize_permute(arg, level+1) for arg in ne.args]
ixs0 = getixsv(ne.eins)
ixs = Vector{LT}[isleaf(x) ? ixs0[i] : getiyv(x.eins) for (i, x) in enumerate(args)]
iy = level == 0 ? getiyv(ne.eins) : optimize_output_permute(ixs, getiyv(ne.eins))
return NestedEinsum(args, EinCode(ixs, iy))
end
end
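# For a pairwise contraction, permute the output labels into the order
# (outer legs of A, outer legs of B, batch legs, broadcast legs), each group following the
# order of its appearance in the corresponding input — a layout intended to map pairwise
# contractions more directly onto batched matrix multiplication.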
function optimize_output_permute(ixs::AbstractVector{<:AbstractVector{LT}}, iy::AbstractVector{LT}) where LT
if length(ixs) != 2
return iy
else
iA, iB = ixs
batchdim = LT[]
outerA = LT[]
outerB = LT[]
bcastdim = LT[]
for l in iy
if l ∈ iA
if l ∈ iB
push!(batchdim, l)
else
push!(outerA, l)
end
else
if l ∈ iB
push!(outerB, l)
else
push!(bcastdim, l)
end
end
end
return vcat(
sort!(outerA, by=l->findfirst(==(l), iA)),
sort!(outerB, by=l->findfirst(==(l), iB)),
sort!(batchdim, by=l->findfirst(==(l), iA)),
bcastdim)
end
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 23531 | ################### expression tree ###################
# `ExprInfo` stores the node information.
# * `out_dims` is the output dimensions of this tree/subtree.
# * `tensorid` specifies the tensor index for leaf nodes. It is `-1` for non-leaf nodes.
struct ExprInfo
out_dims::Vector{Int}
tensorid::Int
end
ExprInfo(out_dims::Vector{Int}) = ExprInfo(out_dims, -1)
# `ExprTree` is the expression tree for tensor contraction (or contraction tree), it is a binary tree (including leaf nodes without siblings).
# `left` and `right` are left and right branches, they are either both specified (non-leaf) or both unspecified (leaf), see [`isleaf`](@ref) function.
# `ExprTree()` for constructing a leaf node,
# `ExprTree(left, right, info)` for constructing a non-leaf node.
mutable struct ExprTree
left::ExprTree
right::ExprTree
info::ExprInfo
ExprTree(info) = (res = new(); res.info=info; res)
ExprTree(left, right, info) = new(left, right, info)
end
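# A minimal construction sketch (hypothetical integer labels; tensor ids 1 and 2):
#   leaf1 = ExprTree(ExprInfo([1, 2], 1))            # leaf for tensor 1 with output labels [1, 2]
#   leaf2 = ExprTree(ExprInfo([2, 3], 2))            # leaf for tensor 2 with output labels [2, 3]
#   node  = ExprTree(leaf1, leaf2, ExprInfo([1, 3])) # contract over the shared label 2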
function print_expr(io::IO, expr::ExprTree, level=0)
isleaf(expr) && return print(io, " "^(2*level), labels(expr), " ($(expr.info.tensorid))")
print(io, " "^(2*level), "(\n")
print_expr(io, expr.left, level+1)
print(io, "\n")
print_expr(io, expr.right, level+1)
print(io, "\n")
print(io, " "^(2*level), ") := ", labels(expr))
end
# if `expr` is a leaf, it should have `left` and `right` fields both unspecified.
isleaf(expr::ExprTree) = !isdefined(expr, :left)
Base.show(io::IO, expr::ExprTree) = print_expr(io, expr, 0)
Base.show(io::IO, ::MIME"text/plain", expr::ExprTree) = show(io, expr)
siblings(t::ExprTree) = isleaf(t) ? ExprTree[] : ExprTree[t.left, t.right]
Base.copy(t::ExprTree) = isleaf(t) ? ExprTree(t.info) : ExprTree(copy(t.left), copy(t.right), copy(t.info))
Base.copy(info::ExprInfo) = ExprInfo(copy(info.out_dims), info.tensorid)
# output tensor labels
labels(t::ExprTree) = t.info.out_dims
# find the maximum label recursively, this is a helper function for converting an expression tree back to einsum.
maxlabel(t::ExprTree) = isleaf(t) ? maximum(isempty(labels(t)) ? 0 : labels(t)) : max(isempty(labels(t)) ? 0 : maximum(labels(t)), maxlabel(t.left), maxlabel(t.right))
# comparison between `ExprTree`s, mainly for testing
Base.:(==)(t1::ExprTree, t2::ExprTree) = _equal(t1, t2)
Base.:(==)(t1::ExprInfo, t2::ExprInfo) = _equal(t1.out_dims, t2.out_dims) && t1.tensorid == t2.tensorid
function _equal(t1::ExprTree, t2::ExprTree)
isleaf(t1) != isleaf(t2) && return false
isleaf(t1) ? t1.info == t2.info : _equal(t1.left, t2.left) && _equal(t1.right, t2.right) && t1.info == t2.info
end
_equal(t1::Vector, t2::Vector) = Set(t1) == Set(t2)
############# Slicer ######################
struct Slicer
log2_sizes::Vector{Float64} # the size dict after slicing
legs::Dict{Int,Float64} # sliced leg and its original size
max_size::Int # maximum number of sliced legs
fixed_slices::Vector{Int} # legs that are always sliced (fixed by the user)
end
function Slicer(log2_sizes::AbstractVector{Float64}, max_size::Int, fixed_slices::AbstractVector)
slicer = Slicer(collect(log2_sizes), Dict{Int,Float64}(), max_size, collect(Int,fixed_slices))
for l in fixed_slices
push!(slicer, l)
end
return slicer
end
Base.length(s::Slicer) = length(s.legs)
function Base.replace!(slicer::Slicer, pair::Pair)
worst, best = pair
@assert worst ∉ slicer.fixed_slices
@assert haskey(slicer.legs, worst)
@assert !haskey(slicer.legs, best)
slicer.log2_sizes[worst] = slicer.legs[worst] # restore worst size
slicer.legs[best] = slicer.log2_sizes[best] # add best to legs
slicer.log2_sizes[best] = 0.0
delete!(slicer.legs, worst) # remove worst from legs
return slicer
end
function Base.push!(slicer::Slicer, best)
@assert length(slicer) < slicer.max_size
@assert !haskey(slicer.legs, best)
slicer.legs[best] = slicer.log2_sizes[best] # add best to legs
slicer.log2_sizes[best] = 0.0
return slicer
end
# convert the slicer to a vector of sliced labels
function get_slices(s::Slicer, inverse_map::Dict{Int,LT}) where LT
# we want to keep the order of input fixed slices!
LT[[inverse_map[l] for l in s.fixed_slices]..., [inverse_map[l] for (l, sz) in s.legs if l ∉ s.fixed_slices]...]
end
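# A minimal sketch of the slicing bookkeeping (internal API; all sizes are log2):
#   s = Slicer([1.0, 2.0, 3.0], 2, Int[])  # at most 2 sliced legs, none fixed
#   push!(s, 3)          # slice leg 3: s.log2_sizes[3] becomes 0.0, original size saved in s.legs
#   replace!(s, 3 => 2)  # un-slice leg 3 (restoring its size) and slice leg 2 instead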
############# random expression tree ###############
function random_exprtree(code::EinCode)
ixs, iy = getixsv(code), getiyv(code)
labels = _label_dict(ixs, iy)
return random_exprtree([Int[labels[l] for l in ix] for ix in ixs], Int[labels[l] for l in iy], length(labels))
end
function random_exprtree(ixs::Vector{Vector{Int}}, iy::Vector{Int}, nedge::Int)
outercount = zeros(Int, nedge)
allcount = zeros(Int, nedge)
for l in iy
outercount[l] += 1
allcount[l] += 1
end
for ix in ixs
for l in ix
allcount[l] += 1
end
end
_random_exprtree(ixs, collect(1:length(ixs)), outercount, allcount)
end
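# `allcount[l]` is the total number of appearances of label `l`; `outercount[l]` counts
# appearances outside the current subtree (initialized with the output `iy`). A label stays
# in a subtree's output iff it appears both inside (outercount != allcount) and outside
# (outercount != 0) the subtree.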
function _random_exprtree(ixs::Vector{Vector{Int}}, xindices, outercount::Vector{Int}, allcount::Vector{Int})
n = length(ixs)
if n == 1
return ExprTree(ExprInfo(ixs[1], xindices[1]))
end
mask = rand(Bool, n)
if all(mask) || !any(mask) # prevent invalid partition
i = rand(1:n)
mask[i] = ~(mask[i])
end
info = ExprInfo(Int[i for i=1:length(outercount) if outercount[i]!=allcount[i] && outercount[i]!=0])
outercount1, outercount2 = copy(outercount), copy(outercount)
for i=1:n
counter = mask[i] ? outercount2 : outercount1
for l in ixs[i]
counter[l] += 1
end
end
return ExprTree(_random_exprtree(ixs[mask], xindices[mask], outercount1, allcount), _random_exprtree(ixs[(!).(mask)], xindices[(!).(mask)], outercount2, allcount), info)
end
##################### convert a contraction tree back to a nested einsum ####################
NestedEinsum(expr::ExprTree) = _nestedeinsum(expr, 1:maxlabel(expr))
NestedEinsum(expr::ExprTree, labelmap) = _nestedeinsum(expr, labelmap)
function _nestedeinsum(tree::ExprTree, lbs::Union{AbstractVector{LT}, Dict{Int,LT}}) where LT
isleaf(tree) && return NestedEinsum{LT}(tree.info.tensorid)
eins = EinCode([getindex.(Ref(lbs), labels(tree.left)), getindex.(Ref(lbs), labels(tree.right))], getindex.(Ref(lbs), labels(tree)))
NestedEinsum([_nestedeinsum(tree.left, lbs), _nestedeinsum(tree.right, lbs)], eins)
end
##################### The main program ##############################
"""
TreeSA{RT,IT,GM}
TreeSA(; sc_target=20, βs=collect(0.01:0.05:15), ntrials=10, niters=50,
sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_config=GreedyMethod(; nrepeat=1))
Optimize the einsum contraction pattern using the simulated annealing on tensor expression tree.
* `sc_target` is the target space complexity,
* `ntrials`, `βs` and `niters` are annealing parameters: `ntrials` independent annealing runs are performed, each with inverse temperatures specified by `βs`, and at each temperature the tree is updated `niters` times.
* `sc_weight` is the relative importance factor of space complexity in the loss compared with the time complexity.
* `rw_weight` is the relative importance factor of memory read and write in the loss compared with the time complexity.
* `initializer` specifies how to determine the initial configuration, it can be `:greedy` or `:random`. When `:greedy` is chosen, the initial tree is generated by the greedy method configured through `greedy_config`.
* `nslices` is the number of sliced legs, default is 0.
* `fixed_slices` is a vector of sliced legs, default is `[]`.
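A minimal usage sketch (assuming `code` is an `EinCode` whose labels all have dimension 2):
```
optcode = optimize_code(code, uniformsize(code, 2), TreeSA(; ntrials=1, nslices=2))
```
The result is a `SlicedEinsum` wrapping the optimized contraction tree.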
### References
* [Recursive Multi-Tensor Contraction for XEB Verification of Quantum Circuits](https://arxiv.org/abs/2108.05665)
"""
Base.@kwdef struct TreeSA{RT,IT,GM,LT} <: CodeOptimizer
sc_target::RT = 20
βs::IT = 0.01:0.05:15
ntrials::Int = 10
niters::Int = 50
sc_weight::Float64 = 1.0
rw_weight::Float64 = 0.2
initializer::Symbol = :greedy
nslices::Int = 0
fixed_slices::Vector{LT} = []
# configure greedy method
greedy_config::GM = GreedyMethod(nrepeat=1)
end
# this is the main function
"""
optimize_tree(code, size_dict; nslices=0, sc_target=20, βs=0.1:0.1:10, ntrials=20, niters=100, sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_method=GreedyMethod(nrepeat=1), fixed_slices=[])
Optimize the einsum contraction pattern specified by `code`, and edge sizes specified by `size_dict`.
Check the docstring of [`TreeSA`](@ref) for detailed explanation of other input arguments.
"""
function optimize_tree(code::AbstractEinsum, size_dict; nslices::Int=0, sc_target=20, βs=0.1:0.1:10, ntrials=20, niters=100, sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_method=GreedyMethod(nrepeat = 1), fixed_slices=[])
LT = labeltype(code)
if nslices < length(fixed_slices)
@warn("Number of slices: $(nslices) is smaller than the number of fixed slices, setting it to: $(length(fixed_slices)).")
nslices = length(fixed_slices)
end
# get input labels (`getixsv`) and output labels (`getiyv`) in the einsum code.
ixs, iy = getixsv(code), getiyv(code)
ninputs = length(ixs) # number of input tensors
if ninputs <= 2 # with at most two input tensors, there is nothing to optimize
return SlicedEinsum(LT[], NestedEinsum(NestedEinsum{LT}.(1:ninputs), EinCode(ixs, iy)))
end
###### Stage 1: preprocessing ######
labels = _label_dict(ixs, iy) # map labels to integers
inverse_map = Dict([v=>k for (k,v) in labels]) # the inverse transformation, map integers to labels
log2_sizes = [log2.(size_dict[inverse_map[i]]) for i=1:length(labels)] # use `log2` sizes in computing time
if ntrials <= 0 # no optimization at all, then 1). initialize an expression tree and 2). convert back to nested einsum.
best_tree = _initializetree(code, size_dict, initializer; greedy_method=greedy_method)
return SlicedEinsum(LT[], NestedEinsum(best_tree, inverse_map))
end
###### Stage 2: computing ######
# create vectors to store optimized 1). expression tree, 2). time complexities, 3). space complexities, 4). read-write complexities and 5). slicing information.
trees, tcs, scs, rws, slicers = Vector{ExprTree}(undef, ntrials), zeros(ntrials), zeros(ntrials), zeros(ntrials), Vector{Slicer}(undef, ntrials)
@threads for t = 1:ntrials # multi-threading on different trials, use `JULIA_NUM_THREADS=5 julia xxx.jl` for setting number of threads.
# 1). random/greedy initialize a contraction tree.
tree = _initializetree(code, size_dict, initializer; greedy_method=greedy_method)
# 2). optimize the `tree` and `slicer` in an in-place manner.
slicer = Slicer(log2_sizes, nslices, Int[labels[l] for l in fixed_slices])
optimize_tree_sa!(tree, log2_sizes, slicer; sc_target=sc_target, βs=βs, niters=niters, sc_weight=sc_weight, rw_weight=rw_weight)
# 3). evaluate time-space-readwrite complexities.
tc, sc, rw = tree_timespace_complexity(tree, slicer.log2_sizes)
@debug "trial $t, time complexity = $tc, space complexity = $sc, read-write complexity = $rw."
trees[t], tcs[t], scs[t], rws[t], slicers[t] = tree, tc, sc, rw, slicer
end
###### Stage 3: postprocessing ######
# compare and choose the best solution
best_tree, best_tc, best_sc, best_rw, best_slicer = first(trees), first(tcs), first(scs), first(rws), first(slicers)
for i=2:ntrials
if scs[i] < best_sc || (scs[i] == best_sc && exp2(tcs[i]) + rw_weight * exp2(rws[i]) < exp2(best_tc) + rw_weight * exp2(best_rw))
best_tree, best_tc, best_sc, best_rw, best_slicer = trees[i], tcs[i], scs[i], rws[i], slicers[i]
end
end
@debug "best space complexities = $best_tc, time complexity = $best_sc, read-write complexity $best_rw."
if best_sc > sc_target
@warn "target space complexity not found, got: $best_sc, with time complexity $best_tc, read-write complexity $best_rw."
end
# to return a sliced einsum, we need to map the sliced dimensions back from integers to labels.
return SlicedEinsum(get_slices(best_slicer, inverse_map), NestedEinsum(best_tree, inverse_map))
end
# initialize a contraction tree
function _initializetree(code, size_dict, method; greedy_method)
if method == :greedy
labels = _label_dict(code) # label to int
return _exprtree(optimize_greedy(code, size_dict; α = greedy_method.α, temperature = greedy_method.temperature, nrepeat=greedy_method.nrepeat), labels)
elseif method == :random
return random_exprtree(code)
elseif method == :specified
labels = _label_dict(code) # label to int
return _exprtree(code, labels)
else
throw(ArgumentError("intializier `$method` is not defined!"))
end
end
# use simulated annealing to optimize a contraction tree
function optimize_tree_sa!(tree::ExprTree, log2_sizes, slicer::Slicer; βs, niters, sc_target, sc_weight, rw_weight)
@assert rw_weight >= 0
@assert sc_weight >= 0
log2rw_weight = log2(rw_weight)
for β in βs
@debug begin
tc, sc, rw = tree_timespace_complexity(tree, log2_sizes)
"β = $β, tc = $tc, sc = $sc, rw = $rw"
end
###### Stage 1: add one slice at each temperature ######
if slicer.max_size > length(slicer.fixed_slices) # `max_size` specifies the maximum number of sliced dimensions.
# 1). find legs that reduce the dimension the most
scs, lbs = Float64[], Vector{Int}[]
# space complexities and labels of all intermediate tensors
tensor_sizes!(tree, slicer.log2_sizes, scs, lbs)
# the set of (intermediate) tensor labels that produce the maximum space complexity
best_labels = _best_labels(scs, lbs)
# 2). slice the best not sliced label (it must appear in largest tensors)
best_not_sliced_labels = filter(x->!haskey(slicer.legs, x), best_labels)
if !isempty(best_not_sliced_labels)
#best_not_sliced_label = rand(best_not_sliced_labels) # random or best
best_not_sliced_label = best_not_sliced_labels[findmax(l->count(==(l), best_not_sliced_labels), best_not_sliced_labels)[2]]
if length(slicer) < slicer.max_size # if has not reached maximum number of slices, add one slice
push!(slicer, best_not_sliced_label)
else # otherwise replace one slice
legs = [l for l in keys(slicer.legs) if l ∉ slicer.fixed_slices] # only slice over not fixed legs
score = [count(==(l), best_labels) for l in legs]
replace!(slicer, legs[argmin(score)]=>best_not_sliced_label)
end
end
@debug begin
tc, sc, rw = tree_timespace_complexity(tree, log2_sizes)
"after slicing: β = $β, tc = $tc, sc = $sc, rw = $rw"
end
end
###### Stage 2: sweep and optimize the contraction tree for `niters` times ######
for _ = 1:niters
optimize_subtree!(tree, β, slicer.log2_sizes, sc_target, sc_weight, log2rw_weight) # single sweep
end
end
return tree, slicer
end
# here "best" means giving maximum space complexity
function _best_labels(scs, lbs)
max_sc = maximum(scs)
return vcat(lbs[scs .> max_sc-0.99]...)
end
# find tensor sizes and their corresponding labels of all intermediate tensors
function tensor_sizes!(tree::ExprTree, log2_sizes, scs, lbs)
sc = isempty(labels(tree)) ? 0.0 : sum(i->log2_sizes[i], labels(tree))
push!(scs, sc)
push!(lbs, labels(tree))
isleaf(tree) && return
tensor_sizes!(tree.left, log2_sizes, scs, lbs)
tensor_sizes!(tree.right, log2_sizes, scs, lbs)
end
# the time-space-readwrite complexity of a contraction tree
function tree_timespace_complexity(tree::ExprTree, log2_sizes)
isleaf(tree) && return (-Inf, isempty(labels(tree)) ? 0.0 : sum(i->log2_sizes[i], labels(tree)), -Inf)
tcl, scl, rwl = tree_timespace_complexity(tree.left, log2_sizes)
tcr, scr, rwr = tree_timespace_complexity(tree.right, log2_sizes)
tc, sc, rw = tcscrw(labels(tree.left), labels(tree.right), labels(tree), log2_sizes, true)
return (fast_log2sumexp2(tc, tcl, tcr), max(sc, scl, scr), fast_log2sumexp2(rw, rwl, rwr))
end
# returns time complexity, space complexity and read-write complexity (0 if `compute_rw` is false)
# `ix1` and `ix2` are vectors of labels for the first and second input tensors.
# `iy` is a vector of labels for the output tensors.
# `log2_sizes` is the log2 size of labels (note labels are integers, so we do not need a dict to index label sizes).
@inline function tcscrw(ix1, ix2, iy, log2_sizes::Vector{T}, compute_rw) where T
l1, l2, l3 = ix1, ix2, iy
sc1 = (!compute_rw || isempty(l1)) ? zero(T) : sum(i->(@inbounds log2_sizes[i]), l1)
sc2 = (!compute_rw || isempty(l2)) ? zero(T) : sum(i->(@inbounds log2_sizes[i]), l2)
sc = isempty(l3) ? zero(T) : sum(i->(@inbounds log2_sizes[i]), l3)
tc = sc
# Note: assuming labels in `l1` being unique
@inbounds for l in l1
if l ∈ l2 && l ∉ l3
tc += log2_sizes[l]
end
end
rw = compute_rw ? fast_log2sumexp2(sc, sc1, sc2) : 0.0
return tc, sc, rw
end
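# Worked example (matrix multiplication; all log2 sizes equal 1, i.e. bond dimension 2):
#   tcscrw([1, 2], [2, 3], [1, 3], ones(3), true)
# gives tc = 3 (output size 2^2 times the contracted label, i.e. 2^3 multiply-adds for a
# 2×2 matmul), sc = 2 (two output legs), and rw = log2(2^2 + 2^2 + 2^2) ≈ 3.585
# (reading both inputs and writing the output).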
# optimize a contraction tree recursively
function optimize_subtree!(tree, β, log2_sizes, sc_target, sc_weight, log2rw_weight)
# find applicable local rules; at most 4 rules can be applied.
# Not all rules are always applicable, because the left or right sibling may be a leaf.
rst = ruleset(tree)
if !isempty(rst)
# propose a random update rule, TODO: can we have a better selector?
rule = rand(rst)
optimize_rw = log2rw_weight != -Inf
# difference in time, space and read-write complexity if the selected rule is applied
tc0, tc1, dsc, rw0, rw1, subout = tcsc_diff(tree, rule, log2_sizes, optimize_rw)
dtc = optimize_rw ? fast_log2sumexp2(tc1, log2rw_weight + rw1) - fast_log2sumexp2(tc0, log2rw_weight + rw0) : tc1 - tc0
sc = _sc(tree, rule, log2_sizes) # current space complexity
# update the loss function
dE = (max(sc, sc+dsc) > sc_target ? sc_weight : 0) * dsc + dtc
if rand() < exp(-β*dE) # ACCEPT
update_tree!(tree, rule, subout)
end
for subtree in siblings(tree) # RECURSE
optimize_subtree!(subtree, β, log2_sizes, sc_target, sc_weight, log2rw_weight)
end
end
end
# if rule ∈ [1, 2], left sibling will be updated, otherwise, right sibling will be updated.
# we need to compute space complexity for current node and the updated sibling, and return the larger one.
_sc(tree, rule, log2_sizes) = max(__sc(tree, log2_sizes), __sc((rule == 1 || rule == 2) ? tree.left : tree.right, log2_sizes))
__sc(tree, log2_sizes) = length(labels(tree))==0 ? 0.0 : sum(l->log2_sizes[l], labels(tree)) # space complexity of current node
@inline function ruleset(tree::ExprTree)
if isleaf(tree) || (isleaf(tree.left) && isleaf(tree.right))
return 1:0
elseif isleaf(tree.right)
return 1:2
elseif isleaf(tree.left)
return 3:4
else
return 1:4
end
end
function tcsc_diff(tree::ExprTree, rule, log2_sizes, optimize_rw)
if rule == 1 # (a,b), c -> (a,c),b
return abcacb(labels(tree.left.left), labels(tree.left.right), labels(tree.left), labels(tree.right), labels(tree), log2_sizes, optimize_rw)
elseif rule == 2 # (a,b), c -> (c,b),a
return abcacb(labels(tree.left.right), labels(tree.left.left), labels(tree.left), labels(tree.right), labels(tree), log2_sizes, optimize_rw)
elseif rule == 3 # a,(b,c) -> b,(a,c)
return abcacb(labels(tree.right.right), labels(tree.right.left), labels(tree.right), labels(tree.left), labels(tree), log2_sizes, optimize_rw)
else # a,(b,c) -> c,(b,a)
return abcacb(labels(tree.right.left), labels(tree.right.right), labels(tree.right), labels(tree.left), labels(tree), log2_sizes, optimize_rw)
end
end
# compute the time complexity, space complexity and read-write complexity information for the contraction update rule "((a,b),c) -> ((a,c),b)"
function abcacb(a, b, ab, c, d, log2_sizes, optimize_rw)
tc0, sc0, rw0 = _tcsc_merge(a, b, ab, c, d, log2_sizes, optimize_rw)
ac = Int[] # labels for contraction result of (a, c)
for l in a
if l ∈ b || l ∈ d # suppose no repeated indices
push!(ac, l)
end
end
for l in c
if l ∉ a && (l ∈ b || l ∈ d) # suppose no repeated indices
push!(ac, l)
end
end
tc1, sc1, rw1 = _tcsc_merge(a, c, ac, b, d, log2_sizes, optimize_rw)
return tc0, tc1, sc1-sc0, rw0, rw1, ac
end
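# Worked example (hypothetical labels): for a = [i,j], b = [j,k], c = [k,l] and output d = [i,l],
# the original order first forms ab = [i,k]; the proposed rule instead forms ac = [i,j,k,l],
# which here increases the intermediate size — such moves can still be accepted at finite temperature.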
# compute complexity for a two-step contraction: (a, b) -> ab, (ab, c) -> d
function _tcsc_merge(a, b, ab, c, d, log2_sizes, optimize_rw)
tcl, scl, rwl = tcscrw(a, b, ab, log2_sizes, optimize_rw) # this is correct
tcr, scr, rwr = tcscrw(ab, c, d, log2_sizes, optimize_rw)
fast_log2sumexp2(tcl, tcr), max(scl, scr), (optimize_rw ? fast_log2sumexp2(rwl, rwr) : 0.0)
end
# apply the update rule
function update_tree!(tree::ExprTree, rule::Int, subout)
if rule == 1 # (a,b), c -> (a,c),b
b, c = tree.left.right, tree.right
tree.left.right = c
tree.right = b
tree.left.info = ExprInfo(subout)
elseif rule == 2 # (a,b), c -> (c,b),a
a, c = tree.left.left, tree.right
tree.left.left = c
tree.right = a
tree.left.info = ExprInfo(subout)
elseif rule == 3 # a,(b,c) -> b,(a,c)
a, b = tree.left, tree.right.left
tree.left = b
tree.right.left = a
tree.right.info = ExprInfo(subout)
else # a,(b,c) -> c,(b,a)
a, c = tree.left, tree.right.right
tree.left = c
tree.right.right = a
tree.right.info = ExprInfo(subout)
end
return tree
end
# map labels to integers.
_label_dict(code) = _label_dict(getixsv(code), getiyv(code))
function _label_dict(ixsv::AbstractVector{<:AbstractVector{LT}}, iyv::AbstractVector{LT}) where LT
v = unique(vcat(ixsv..., iyv))
labels = Dict{LT,Int}(zip(v, 1:length(v)))
return labels
end
# construct the contraction tree recursively from a nested einsum.
function ExprTree(code::NestedEinsum)
_exprtree(code, _label_dict(code))
end
function _exprtree(code::NestedEinsum, labels)
@assert length(code.args) == 2 "einsum contraction not in the binary form, got number of arguments: $(length(code.args))"
ExprTree(map(enumerate(code.args)) do (i,arg)
if isleaf(arg) # leaf nodes
ExprTree(ExprInfo(getindex.(Ref(labels), getixsv(code.eins)[i]), arg.tensorindex))
else
res = _exprtree(arg, labels)
end
end..., ExprInfo(Int[labels[i] for i=getiyv(code.eins)]))
end
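# numerically stable log2(2^a + 2^b) (and its three-argument variant below): factor out the maximum exponent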
@inline function fast_log2sumexp2(a, b)
mm, ms = minmax(a, b)
return log2(exp2(mm - ms) + 1) + ms
end
@inline function fast_log2sumexp2(a, b, c)
if a > b
if a > c
m1, m2, ms = b, c, a
else
m1, m2, ms = a, b, c
end
else
if b > c
m1, m2, ms = c, a, b
else
m1, m2, ms = b, a, c
end
end
return Base.FastMath.log2(Base.FastMath.exp2(m1 - ms) + Base.FastMath.exp2(m2 - ms) + 1) + ms
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 7497 | """
struct ExactTreewidth{GM} <: CodeOptimizer
ExactTreewidth(greedy_config::GM = GreedyMethod(nrepeat=1))
An optimizer using the exact treewidth solver provided by TreeWidthSolver.jl. The `greedy_config` is the configuration for the greedy method, which is used to solve the subproblems in the tree decomposition.
# Fields
- `greedy_config::GM`: The configuration for the greedy method.
"""
Base.@kwdef struct ExactTreewidth{GM} <: CodeOptimizer
greedy_config::GM = GreedyMethod(nrepeat=1)
end
"""
exact_treewidth_method(incidence_list::IncidenceList{VT,ET}, log2_edge_sizes; α::TA = 0.0, temperature::TT = 0.0, nrepeat=1) where {VT,ET,TA,TT}
This function calculates the exact treewidth of a graph using TreeWidthSolver.jl. It takes an incidence list representation of the graph (`incidence_list`) and a dictionary of logarithm base 2 edge sizes (`log2_edge_sizes`) as input. The function also accepts optional parameters `α`, `temperature`, and `nrepeat` with default values of 0.0, 0.0, and 1 respectively, which are parameters of the GreedyMethod used as a sub-optimizer in the contraction process.
## Arguments
- `incidence_list`: An incidence list representation of the graph.
- `log2_edge_sizes`: A dictionary of logarithm base 2 edge sizes.
## Returns
- The function returns a `ContractionTree` representing the contraction process.
```
julia> optimizer = OMEinsumContractionOrders.ExactTreewidth()
OMEinsumContractionOrders.ExactTreewidth{GreedyMethod{Float64, Float64}}(GreedyMethod{Float64, Float64}(0.0, 0.0, 1))
julia> eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
ab, acd, bcef, e, df -> a
julia> size_dict = Dict([c=>(1<<i) for (i,c) in enumerate(['a', 'b', 'c', 'd', 'e', 'f'])]...)
Dict{Char, Int64} with 6 entries:
'f' => 64
'a' => 2
'c' => 8
'd' => 16
'e' => 32
'b' => 4
julia> optcode = optimize_code(eincode, size_dict, optimizer)
ab, ab -> a
├─ ab
└─ fac, bcf -> ab
├─ df, acd -> fac
│ ├─ df
│ └─ acd
└─ e, bcef -> bcf
├─ e
└─ bcef
```
"""
function exact_treewidth_method(incidence_list::IncidenceList{VT,ET}, log2_edge_sizes; α::TA = 0.0, temperature::TT = 0.0, nrepeat=1) where {VT,ET,TA,TT}
indicies = collect(keys(incidence_list.e2v))
tensors = collect(keys(incidence_list.v2e))
weights = [log2_edge_sizes[e] for e in indicies]
line_graph = il2lg(incidence_list, indicies)
scalars = [i for i in tensors if isempty(incidence_list.v2e[i])]
contraction_trees = Vector{Union{ContractionTree, VT}}()
# handle each connected component of the line graph separately
for vertice_ids in connected_components(line_graph)
lg = induced_subgraph(line_graph, vertice_ids)[1]
lg_indicies = indicies[vertice_ids]
lg_weights = weights[vertice_ids]
eo = elimination_order(lg, labels = lg_indicies, weights = lg_weights)
contraction_tree = eo2ct(eo, incidence_list, log2_edge_sizes, α, temperature, nrepeat)
push!(contraction_trees, contraction_tree)
end
# add the scalars back to the contraction tree
return reduce((x,y) -> ContractionTree(x, y), contraction_trees ∪ scalars)
end
# transform incidence list to line graph
function il2lg(incidence_list::IncidenceList{VT, ET}, indicies::Vector{ET}) where {VT, ET}
line_graph = SimpleGraph(length(indicies))
for (i, e) in enumerate(indicies)
for v in incidence_list.e2v[e]
for ej in incidence_list.v2e[v]
if e != ej add_edge!(line_graph, i, findfirst(==(ej), indicies)) end
end
end
end
return line_graph
end
# transform elimination order to contraction tree
function eo2ct(elimination_order::Vector{Vector{TL}}, incidence_list::IncidenceList{VT, ET}, log2_edge_sizes, α::TA, temperature::TT, nrepeat) where {TL, VT, ET, TA, TT}
eo = copy(elimination_order)
incidence_list = copy(incidence_list)
contraction_tree_nodes = Vector{Union{VT, ContractionTree}}(collect(keys(incidence_list.v2e)))
tensors_list = Dict{VT, Int}()
for (i, v) in enumerate(contraction_tree_nodes)
tensors_list[v] = i
end
flag = contraction_tree_nodes[1]
while !isempty(eo)
eliminated_vertices = pop!(eo) # a vector of labels (line-graph vertices) eliminated at the same time
vs = unique!(vcat([incidence_list.e2v[ei] for ei in eliminated_vertices if haskey(incidence_list.e2v, ei)]...)) # the tensors to be contracted, since they are connected to the eliminated vertices
if length(vs) >= 2
sub_list_indices = unique!(vcat([incidence_list.v2e[v] for v in vs]...)) # the vertices connected to the tensors to be contracted
sub_list_open_indices = setdiff(sub_list_indices, eliminated_vertices) # the vertices connected to the tensors to be contracted but not eliminated
sub_list = IncidenceList(Dict([v => incidence_list.v2e[v] for v in vs]); openedges=sub_list_open_indices) # the subgraph of the contracted tensors
sub_tree, scs, tcs = tree_greedy(sub_list, log2_edge_sizes; nrepeat=nrepeat, α=α, temperature=temperature) # optimize the subgraph with the greedy method
vi = contract_tree!(incidence_list, sub_tree, log2_edge_sizes, scs, tcs) # insert the contracted tensors back to the total graph
contraction_tree_nodes[tensors_list[vi]] = st2ct(sub_tree, tensors_list, contraction_tree_nodes)
flag = vi
end
end
return contraction_tree_nodes[tensors_list[flag]]
end
function st2ct(sub_tree::Union{ContractionTree, VT}, tensors_list::Dict{VT, Int}, contraction_tree_nodes::Vector{Union{ContractionTree, VT}}) where{VT}
if sub_tree isa ContractionTree
return ContractionTree(st2ct(sub_tree.left, tensors_list, contraction_tree_nodes), st2ct(sub_tree.right, tensors_list, contraction_tree_nodes))
else
return contraction_tree_nodes[tensors_list[sub_tree]]
end
end
"""
optimize_exact_treewidth(optimizer, eincode, size_dict)
Optimize the contraction order by exactly solving the treewidth of the line graph corresponding to the eincode; returns a `NestedEinsum` object.
Check the docstring of `exact_treewidth_method` for detailed explanation of other input arguments.
"""
function optimize_exact_treewidth(optimizer::ExactTreewidth{GM}, code::EinCode{L}, size_dict::Dict) where {L,GM}
optimize_exact_treewidth(optimizer, getixsv(code), getiyv(code), size_dict)
end
function optimize_exact_treewidth(optimizer::ExactTreewidth{GM}, ixs::AbstractVector{<:AbstractVector}, iy::AbstractVector, size_dict::Dict{L,TI}) where {L, TI, GM}
if length(ixs) <= 2
return NestedEinsum(NestedEinsum{L}.(1:length(ixs)), EinCode(ixs, iy))
end
log2_edge_sizes = Dict{L,Float64}()
for (k, v) in size_dict
log2_edge_sizes[k] = log2(v)
end
# complete all open edges as a clique, connected with a dummy tensor
incidence_list = IncidenceList(Dict([i=>ixs[i] for i=1:length(ixs)] ∪ [(length(ixs) + 1 => iy)]))
α = optimizer.greedy_config.α
temperature = optimizer.greedy_config.temperature
nrepeat = optimizer.greedy_config.nrepeat
tree = exact_treewidth_method(incidence_list, log2_edge_sizes; α = α, temperature = temperature, nrepeat=nrepeat)
# remove the dummy tensor added for open edges
optcode = parse_eincode!(incidence_list, tree, 1:length(ixs) + 1)[2]
return pivot_tree(optcode, length(ixs) + 1)
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 95 | function log2sumexp2(s)
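# numerically stable log2(sum(exp2, s)): subtract the maximum before exponentiating to avoid overflow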
ms = maximum(s)
return log2(sum(x->exp2(x - ms), s)) + ms
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 658 | function ein2hypergraph(args...; kwargs...)
throw(ArgumentError("Extension `LuxorTensorPlot` not loaeded, please load it first by `using LuxorGraphPlot`."))
end
function ein2elimination(args...; kwargs...)
throw(ArgumentError("Extension `LuxorTensorPlot` not loaeded, please load it first by `using LuxorGraphPlot`."))
end
function viz_eins(args...; kwargs...)
throw(ArgumentError("Extension `LuxorTensorPlot` not loaeded, please load it first by `using LuxorGraphPlot`."))
end
function viz_contraction(args...; kwargs...)
throw(ArgumentError("Extension `LuxorTensorPlot` not loaeded, please load it first by `using LuxorGraphPlot`."))
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 1279 | using OMEinsumContractionOrders, OMEinsum
using OMEinsumContractionOrders: pivot_tree, path_to_tensor
using Test
@testset "tree reformulate" begin
eincode = ein"((ik, jkl), ij), (lm, m) -> "
code = OMEinsum.rawcode(eincode)
size_dict = Dict([c=>(1<<i) for (i,c) in enumerate(['i', 'j', 'k', 'l', 'm'])]...)
tensor_labels = [['i', 'k'], ['j', 'k', 'l'], ['i', 'j'], ['l', 'm'], ['m']]
size_tensors = [log2(prod(size_dict[l] for l in tensor)) for tensor in tensor_labels]
tensors = [rand([size_dict[j] for j in tensor_labels[i]]...) for i in 1:5]
for tensor_index in 1:5
path = path_to_tensor(code, tensor_index)
tensor = reduce((x, y) -> x.args[y], path, init = code)
@test tensor.tensorindex == tensor_index
new_code = pivot_tree(code, tensor_index)
@test contraction_complexity(new_code, size_dict).sc == max(contraction_complexity(code, size_dict).sc, size_tensors[tensor_index])
closed_code = OMEinsumContractionOrders.NestedEinsum([new_code, tensor], OMEinsumContractionOrders.EinCode([OMEinsumContractionOrders.getiyv(new_code), tensor_labels[tensor_index]], Char[]))
new_eincode = OMEinsum.decorate(closed_code)
@test eincode(tensors...) ≈ new_eincode(tensors...)
end
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 5302 | using OMEinsumContractionOrders
using OMEinsumContractionOrders: analyze_contraction, contract_pair!, evaluate_costs, contract_tree!, log2sumexp2, parse_tree
using OMEinsumContractionOrders: IncidenceList, analyze_contraction, LegInfo, tree_greedy, parse_eincode, optimize_greedy
using TropicalNumbers
using Test, Random
@testset "analyze contraction" begin
incidence_list = IncidenceList(Dict('A' => ['a', 'b', 'k', 'o', 'f'], 'B'=>['a', 'c', 'd', 'm', 'f'], 'C'=>['b', 'c', 'e', 'f'], 'D'=>['e'], 'E'=>['d', 'f']), openedges=['c', 'f', 'o'])
info = analyze_contraction(incidence_list, 'A', 'B')
@test Set(info.l1) == Set(['k'])
@test Set(info.l2) == Set(['m'])
@test Set(info.l12) == Set(['a'])
@test Set(info.l01) == Set(['b','o'])
@test Set(info.l02) == Set(['c', 'd'])
@test Set(info.l012) == Set(['f'])
end
@testset "tree greedy" begin
Random.seed!(2)
incidence_list = IncidenceList(Dict('A' => ['a', 'b'], 'B'=>['a', 'c', 'd'], 'C'=>['b', 'c', 'e', 'f'], 'D'=>['e'], 'E'=>['d', 'f']))
log2_edge_sizes = Dict([c=>i for (i,c) in enumerate(['a', 'b', 'c', 'd', 'e', 'f'])]...)
edge_sizes = Dict([c=>(1<<i) for (i,c) in enumerate(['a', 'b', 'c', 'd', 'e', 'f'])]...)
il = copy(incidence_list)
contract_pair!(il, 'A', 'B', log2_edge_sizes)
target = IncidenceList(Dict('A' => ['b', 'c', 'd'], 'C'=>['b', 'c', 'e', 'f'], 'D'=>['e'], 'E'=>['d', 'f']))
@test il.v2e == target.v2e
@test length(target.e2v) == length(il.e2v)
for (k,v) in il.e2v
@test sort(target.e2v[k]) == sort(v)
end
costs = evaluate_costs(0.0, incidence_list, log2_edge_sizes)
@test costs == Dict(('A', 'B')=>(2^9), ('A', 'C')=>2^15, ('B','C')=>2^18, ('B','E')=>2^10, ('C','D')=>2^11, ('C', 'E')=>2^14)
tree, log2_tcs, log2_scs = tree_greedy(incidence_list, log2_edge_sizes)
tcs_, scs_ = [], []
contract_tree!(copy(incidence_list), tree, log2_edge_sizes, tcs_, scs_)
@test all((log2sumexp2(tcs_), maximum(scs_)) .<= (log2(exp2(10)+exp2(16)+exp2(15)+exp2(9)), 11))
vertices = ['A', 'B', 'C', 'D', 'E']
optcode1 = parse_eincode(incidence_list, tree, vertices=vertices)
@test optcode1 isa OMEinsumContractionOrders.NestedEinsum
tree2 = parse_tree(optcode1, vertices)
@test tree2 == tree
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], Vector{Char}())
size_dict = Dict([c=>(1<<i) for (i,c) in enumerate(['a', 'b', 'c', 'd', 'e', 'f'])]...)
Random.seed!(2)
optcode2 = optimize_greedy(eincode, size_dict)
cc = contraction_complexity(optcode2, edge_sizes)
# test flop
@test cc.tc ≈ log2(flop(optcode2, edge_sizes))
@test flop(OMEinsumContractionOrders.EinCode([['i']], Vector{Char}()), Dict('i'=>4)) == 4
@test 16 <= cc.tc <= log2(exp2(10)+exp2(16)+exp2(15)+exp2(9))
@test cc.sc == 11
@test optcode1 == optcode2
end
@testset "fullerene" begin
Random.seed!(123)
function fullerene()
φ = (1+√5)/2
res = NTuple{3,Float64}[]
for (x, y, z) in ((0.0, 1.0, 3φ), (1.0, 2 + φ, 2φ), (φ, 2.0, 2φ + 1.0))
for (α, β, γ) in ((x,y,z), (y,z,x), (z,x,y))
for loc in ((α,β,γ), (α,β,-γ), (α,-β,γ), (α,-β,-γ), (-α,β,γ), (-α,β,-γ), (-α,-β,γ), (-α,-β,-γ))
if loc ∉ res
push!(res, loc)
end
end
end
end
return res
end
# flatten nested einsum
function _flatten(code::OMEinsumContractionOrders.NestedEinsum, iy=nothing)
isleaf(code) && return [tensorindex(code)=>iy]
sibs = siblings(code)
ixs = []
for i=1:length(sibs)
append!(ixs, _flatten(sibs[i], (rootcode(code).ixs)[i]))
end
return ixs
end
flatten(code::OMEinsumContractionOrders.EinCode) = code
function flatten(code::OMEinsumContractionOrders.NestedEinsum{LT}) where LT
ixd = Dict(_flatten(code))
OMEinsumContractionOrders.EinCode([ixd[i] for i=1:length(ixd)], collect((code.eins).iy))
end
isleaf(ne::OMEinsumContractionOrders.NestedEinsum) = ne.tensorindex != -1
siblings(ne::OMEinsumContractionOrders.NestedEinsum) = ne.args
tensorindex(ne::OMEinsumContractionOrders.NestedEinsum) = ne.tensorindex
rootcode(ne::OMEinsumContractionOrders.NestedEinsum) = ne.eins
c60_xy = fullerene()
c60_edges = [[i,j] for (i,(i2,j2,k2)) in enumerate(c60_xy), (j,(i1,j1,k1)) in enumerate(c60_xy) if i<j && (i2-i1)^2+(j2-j1)^2+(k2-k1)^2 < 5.0]
code = OMEinsumContractionOrders.EinCode(vcat(c60_edges, [[i] for i=1:60]), Vector{Int}())
size_dict = Dict([i=>2 for i in 1:60])
log2_edge_sizes = Dict([i=>1 for i in 1:60])
edge_sizes = Dict([i=>2 for i in 1:60])
cc = contraction_complexity(code, edge_sizes)
@test cc.tc == 60
@test cc.sc == 0
optcode = optimize_greedy(code, size_dict)
cc2 = contraction_complexity(optcode, edge_sizes)
@test cc2.sc == 10
@test flatten(optcode) == code
@test flatten(code) == code
optcode_hyper = optimize_greedy(code, size_dict, α = 0.0, temperature = 100.0, nrepeat = 20)
cc3 = contraction_complexity(optcode_hyper, edge_sizes)
@test cc3.sc <= 12
@test flatten(optcode_hyper) == code
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 5673 | using OMEinsum, OMEinsumContractionOrders
using Test, Random, Graphs
using KaHyPar
using OMEinsum
@testset "interface" begin
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [minmax(e.src,e.dst) for e in Graphs.edges(g)]
return EinCode((ixs..., [(i,) for i in Graphs.vertices(g)]...), ())
end
Random.seed!(2)
g = random_regular_graph(100, 3)
code = random_regular_eincode(100, 3)
xs = [[randn(2,2) for i=1:150]..., [randn(2) for i=1:100]...]
results = Float64[]
for optimizer in [TreeSA(ntrials=1), TreeSA(ntrials=1, nslices=5), GreedyMethod(), SABipartite(sc_target=18, ntrials=1)]
for simplifier in (nothing, MergeVectors(), MergeGreedy())
@info "optimizer = $(optimizer), simplifier = $(simplifier)"
res = optimize_code(code,uniformsize(code, 2), optimizer, simplifier)
tc, sc = OMEinsum.timespace_complexity(res, uniformsize(code, 2))
@test sc <= 18
push!(results, res(xs...)[])
end
end
if isdefined(Base, :get_extension)
optimizer = KaHyParBipartite(sc_target=18)
for simplifier in (nothing, MergeVectors(), MergeGreedy())
@info "optimizer = $(optimizer), simplifier = $(simplifier)"
res = optimize_code(code,uniformsize(code, 2), optimizer, simplifier)
tc, sc = OMEinsum.timespace_complexity(res, uniformsize(code, 2))
@test sc <= 18
push!(results, res(xs...)[])
end
end
for i=1:length(results)-1
@test results[i] ≈ results[i+1]
end
small_code = random_regular_eincode(10, 3)
xs = [[randn(2,2) for i=1:15]..., [randn(2) for i=1:10]...]
results = Float64[]
for optimizer in [TreeSA(ntrials=1), TreeSA(ntrials=1, nslices=5), GreedyMethod(), SABipartite(sc_target=18, ntrials=1), ExactTreewidth()]
for simplifier in (nothing, MergeVectors(), MergeGreedy())
@info "optimizer = $(optimizer), simplifier = $(simplifier)"
res = optimize_code(small_code,uniformsize(small_code, 2), optimizer, simplifier)
tc, sc = OMEinsum.timespace_complexity(res, uniformsize(small_code, 2))
push!(results, res(xs...)[])
end
end
if isdefined(Base, :get_extension)
optimizer = KaHyParBipartite(sc_target=18)
for simplifier in (nothing, MergeVectors(), MergeGreedy())
@info "optimizer = $(optimizer), simplifier = $(simplifier)"
res = optimize_code(small_code,uniformsize(small_code, 2), optimizer, simplifier)
tc, sc = OMEinsum.timespace_complexity(res, uniformsize(small_code, 2))
push!(results, res(xs...)[])
end
end
for i=1:length(results)-1
@test results[i] ≈ results[i+1]
end
end
@testset "corner case: smaller contraction orders" begin
code = ein"i->"
sizes = uniformsize(code, 2)
sne = StaticNestedEinsum((StaticNestedEinsum{Char}(1),), code)
dne = DynamicNestedEinsum((DynamicNestedEinsum{Char}(1),), DynamicEinCode(code))
@test optimize_code(code, sizes, GreedyMethod()) == sne
@test optimize_code(code, sizes, TreeSA()) == SlicedEinsum(Char[], dne)
@test optimize_code(code, sizes, TreeSA(nslices=2)) == SlicedEinsum(Char[], dne)
isdefined(Base, :get_extension) && (@test optimize_code(code, sizes, KaHyParBipartite(sc_target=25)) == dne)
@test optimize_code(code, sizes, SABipartite(sc_target=25)) == dne
end
@testset "peak memory" begin
Random.seed!(2)
code = ein"(ab,a),ac->bc"
@test peak_memory(code, uniformsize(code, 5)) == 75
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [minmax(e.src,e.dst) for e in Graphs.edges(g)]
return EinCode((ixs..., [(i,) for i in Graphs.vertices(g)]...), ())
end
code = random_regular_eincode(50, 3)
@test peak_memory(code, uniformsize(code, 5)) == (25 * 75 + 5 * 50)
code1 = optimize_code(code, uniformsize(code, 5), GreedyMethod())
pm1 = peak_memory(code1, uniformsize(code, 5))
tc1, sc1, rw1 = timespacereadwrite_complexity(code1, uniformsize(code, 5))
code2 = optimize_code(code, uniformsize(code, 5), TreeSA(ntrials=1, nslices=5))
pm2 = peak_memory(code2, uniformsize(code, 5))
tc2, sc2, rw2 = timespacereadwrite_complexity(code2, uniformsize(code, 5))
@test 10 * 2^sc1 > pm1 > 2^sc1
@test 10 * 2^sc2 > pm2 > 2^sc2
end
if isdefined(Base, :get_extension)
@testset "kahypar regression test" begin
code = ein"i->"
optcode = optimize_code(code, Dict('i'=>4), KaHyParBipartite(; sc_target=10, max_group_size=10))
@test optcode isa NestedEinsum
x = randn(4)
@test optcode(x) ≈ code(x)
code = ein"i,j->"
optcode = optimize_code(code, Dict('i'=>4, 'j'=>4), KaHyParBipartite(; sc_target=10, max_group_size=10))
@test optcode isa NestedEinsum
x = randn(4)
y = randn(4)
@test optcode(x, y) ≈ code(x, y)
code = ein"ij,jk,kl->ijl"
println(code)
optcode = optimize_code(code, Dict('i'=>4, 'j'=>4, 'k'=>4, 'l'=>4), KaHyParBipartite(; sc_target=4, max_group_size=2))
println(optcode)
@test optcode isa NestedEinsum
a, b, c = [rand(4,4) for i=1:4]
@test optcode(a, b, c) ≈ code(a, b, c)
code = ein"ij,jk,kl->ijl"
optcode = optimize_code(code, Dict('i'=>3, 'j'=>3, 'k'=>3, 'l'=>3), KaHyParBipartite(; sc_target=4, max_group_size=2))
@test optcode isa NestedEinsum
a, b, c = [rand(3,3) for i=1:4]
@test optcode(a, b, c) ≈ code(a, b, c)
end
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 633 | using Test, OMEinsumContractionOrders
@testset "save load" begin
for code in [
OMEinsumContractionOrders.EinCode([[1,2], [2,3], [3,4]], [1,4]),
OMEinsumContractionOrders.EinCode([['a','b'], ['b','c'], ['c','d']], ['a','d'])
]
for optcode in [optimize_code(code, uniformsize(code, 2), GreedyMethod()),
optimize_code(code, uniformsize(code, 2), TreeSA(nslices=1))]
filename = tempname()
OMEinsumContractionOrders.writejson(filename, optcode)
code2 = OMEinsumContractionOrders.readjson(filename)
@test optcode == code2
end
end
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 4667 | using Graphs
using Test, Random
using SparseArrays
using OMEinsumContractionOrders
using OMEinsumContractionOrders: get_coarse_grained_graph, _connected_components, bipartite_sc, group_sc, coarse_grained_optimize,
map_tree_to_parts, ContractionTree, optimize_greedy, optimize_kahypar, optimize_kahypar_auto
using KaHyPar
using OMEinsum: decorate
@testset "graph coarse graining" begin
Random.seed!(2)
adj = zeros(5, 6)
for ind in [[1,1], [1,3], [2,2], [3,2], [4,4], [4,5], [4,3], [5, 4]]
adj[ind...] = 1
end
parts = [[1,3], [2], [4]]
incidence_list = get_coarse_grained_graph(sparse(adj), parts)
@test incidence_list.v2e[1] == [1,2,3]
@test incidence_list.v2e[2] == [2]
@test incidence_list.v2e[3] == [3,4,5]
@test incidence_list.openedges == [4]
@test length(_connected_components(adj, parts[1])) == 2
res = coarse_grained_optimize(adj, parts, ones(6), GreedyMethod())
@test res == ContractionTree(ContractionTree(1,2), 3)
@test map_tree_to_parts(res, [[[1,2], 3], [7,6], [9, [4,1]]]) == [[[[1,2], 3], [7,6]], [9, [4,1]]]
end
@testset "kahypar" begin
Random.seed!(2)
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
function random_regular_open_eincode(n, k, m)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
iy = Int[]
while length(iy) < m
v = rand(1:n)
if !(v in iy)
push!(iy, v)
end
end
sort!(iy)
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], iy)
end
g = random_regular_graph(220, 3)
rows = Int[]
cols = Int[]
for (i,edge) in enumerate(edges(g))
push!(rows, edge.src, edge.dst)
push!(cols, i, i)
end
graph = sparse(rows, cols, ones(Int, length(rows)))
sc_target = 28.0
log2_sizes = fill(1, size(graph, 2))
b = KaHyParBipartite(sc_target=sc_target, imbalances=[0.0:0.02:0.8...])
group1, group2 = bipartite_sc(b, graph, collect(1:size(graph, 1)), log2_sizes)
@test group_sc(graph, group1, log2_sizes) <= sc_target
@test group_sc(graph, group2, log2_sizes) <= sc_target
sc_target = 27.0
group11, group12 = bipartite_sc(b, graph, group1, log2_sizes)
@test group_sc(graph, group11, log2_sizes) <= sc_target
@test group_sc(graph, group12, log2_sizes) <= sc_target
code = random_regular_eincode(220, 3)
res = optimize_kahypar(code,uniformsize(code, 2); max_group_size=50, sc_target=30)
cc = contraction_complexity(res, uniformsize(code, 2))
@test cc.sc <= 30
# contraction test
code = random_regular_eincode(50, 3)
codeg = optimize_kahypar(code, uniformsize(code, 2); max_group_size=10, sc_target=10)
codet = optimize_kahypar(code, uniformsize(code, 2); max_group_size=10, sc_target=10, sub_optimizer = TreeSA())
codek = optimize_greedy(code, uniformsize(code, 2))
cc_kg = contraction_complexity(codeg, uniformsize(code, 2))
cc_kt = contraction_complexity(codet, uniformsize(code, 2))
cc_g = contraction_complexity(codek, uniformsize(code, 2))
@test cc_kg.sc <= 12
@test cc_kt.sc <= 12
@test cc_g.sc <= 12
xs = [[2*randn(2, 2) for i=1:75]..., [randn(2) for i=1:50]...]
resg = decorate(codeg)(xs...)
resk = decorate(codek)(xs...)
@test resg ≈ resk
Random.seed!(2)
code = random_regular_eincode(220, 3)
codeg_auto = optimize_kahypar_auto(code, uniformsize(code, 2), sub_optimizer=GreedyMethod())
codet_auto = optimize_kahypar_auto(code, uniformsize(code, 2), sub_optimizer=TreeSA(ntrials = 1, sc_weight = 0.1))
ccg = contraction_complexity(codeg_auto, uniformsize(code, 2))
@show ccg.sc, ccg.tc
cct = contraction_complexity(codet_auto, uniformsize(code, 2))
@show cct.sc, cct.tc
@test ccg.sc <= 30
@test cct.sc <= 30
Random.seed!(2)
code = random_regular_open_eincode(50, 3, 3)
codeg = optimize_kahypar(code, uniformsize(code, 2); max_group_size=10, sc_target=10)
codet = optimize_kahypar(code, uniformsize(code, 2); max_group_size=10, sc_target=10, sub_optimizer = TreeSA())
codek = optimize_greedy(code, uniformsize(code, 2))
xs = [[2*randn(2, 2) for i=1:75]..., [randn(2) for i=1:50]...]
resg = decorate(codeg)(xs...)
rest = decorate(codet)(xs...)
resk = decorate(codek)(xs...)
@test rest ≈ resk
@test resg ≈ resk
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 762 | using OMEinsumContractionOrders
using Test
@testset "Core" begin
include("Core.jl")
end
@testset "greedy" begin
include("greedy.jl")
end
@testset "sa" begin
include("sa.jl")
end
if isdefined(Base, :get_extension)
@testset "kahypar" begin
include("kahypar.jl")
end
end
@testset "treesa" begin
include("treesa.jl")
end
@testset "treewidth" begin
include("treewidth.jl")
end
@testset "simplify" begin
include("simplify.jl")
end
@testset "interfaces" begin
include("interfaces.jl")
end
@testset "json" begin
include("json.jl")
end
# testing the extension `LuxorTensorPlot` for visualization
if isdefined(Base, :get_extension)
@testset "visualization" begin
include("visualization.jl")
end
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 2603 | using Test, OMEinsumContractionOrders
using Graphs, Random
using SparseArrays
using OMEinsumContractionOrders: bipartite_sc, adjacency_matrix, SABipartite, group_sc, bipartite_sc, optimize_sa, optimize_greedy
using OMEinsum: decorate
@testset "sa bipartition" begin
Random.seed!(3)
g = random_regular_graph(120, 3)
adj, = adjacency_matrix([(e.src,e.dst) for e in edges(g)])
ws = fill(log2(2), ne(g))
vertices = 1:110
b = SABipartite(βs=0.1:0.2:20.0, niters=1000, ntrials=100, sc_target=40, initializer=:random)
g1, g2 = bipartite_sc(b, adj, vertices, ws)
@test length(g1) + length(g2) == 110
end
@testset "sa" begin
Random.seed!(2)
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
g = random_regular_graph(220, 3)
rows = Int[]
cols = Int[]
for (i,edge) in enumerate(edges(g))
push!(rows, edge.src, edge.dst)
push!(cols, i, i)
end
graph = sparse(rows, cols, ones(Int, length(rows)))
sc_target = 28.0
log2_sizes = fill(1, size(graph, 2))
βs = 0.01:0.05:10.0
b = SABipartite(βs=βs, niters=1000, ntrials=100, sc_target=28)
group1, group2 = bipartite_sc(b, graph, collect(1:size(graph, 1)), log2_sizes)
@test group_sc(graph, group1, log2_sizes) <= sc_target+2
@test group_sc(graph, group2, log2_sizes) <= sc_target+2
sc_target = 27.0
group11, group12 = bipartite_sc(b, graph, group1, log2_sizes)
@test group_sc(graph, group11, log2_sizes) <= sc_target+2
@test group_sc(graph, group12, log2_sizes) <= sc_target+2
code = random_regular_eincode(220, 3)
res = optimize_sa(code,uniformsize(code, 2); sc_target=30, βs=βs)
cc = contraction_complexity(res, uniformsize(code, 2))
@test cc.sc <= 32
tc1, sc1, rw1 = timespacereadwrite_complexity(res, uniformsize(code, 2))
cc = contraction_complexity(res, uniformsize(code, 2))
@test (tc1, sc1, rw1) == (cc...,)
println(cc)
# contraction test
code = random_regular_eincode(50, 3)
codeg = optimize_sa(code, uniformsize(code, 2); sc_target=12, βs=βs, ntrials=1, initializer=:greedy)
codek = optimize_greedy(code, uniformsize(code, 2))
cc = contraction_complexity(codek, uniformsize(code, 2))
@test cc.sc <= 12
xs = [[2*randn(2, 2) for i=1:75]..., [randn(2) for i=1:50]...]
resg = decorate(codeg)(xs...)
resk = decorate(codek)(xs...)
@test resg ≈ resk
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 3403 | using OMEinsumContractionOrders
using Test, Random
using OMEinsumContractionOrders: merge_vectors, merge_greedy, optimize_greedy, embed_simplifier
using OMEinsum: decorate, rawcode, @ein_str
import OMEinsum
function apply_simplifier(s, xs)
map(s.operations) do op
return decorate(op)(xs...)
end
end
function have_identity(ne::OMEinsumContractionOrders.NestedEinsum)
if OMEinsumContractionOrders.isleaf(ne)
return false
elseif length(OMEinsumContractionOrders.getixsv(ne.eins)) == 1 && OMEinsumContractionOrders.getixsv(ne.eins)[1] == OMEinsumContractionOrders.getiyv(ne.eins)
return true
else
return any(have_identity, ne.args)
end
end
@testset "simplify vectors" begin
tn = rawcode(ein"ab,bc,cd,de,a,b,c,d,f,f,e->ab")
xs = [randn(fill(2, length(ix))...) for ix in OMEinsumContractionOrders.getixsv(tn)]
simplifier, tn2 = merge_vectors(tn)
@test tn2 == rawcode(ein"ab,bc,cd,de,f->ab")
@test decorate(tn)(xs...) ≈ decorate(tn2)(apply_simplifier(simplifier, xs)...)
tn3 = optimize_greedy(tn2, uniformsize(tn2, 2))
tn4 = embed_simplifier(tn3, simplifier)
@test !have_identity(tn4)
@test decorate(tn)(xs...) ≈ decorate(tn4)(xs...)
tn3 = optimize_greedy(tn2, uniformsize(tn2, 2))
tn4 = embed_simplifier(tn3, simplifier)
@test !have_identity(tn4)
@test decorate(tn)(xs...) ≈ decorate(tn4)(xs...)
end
@testset "simplify greedy" begin
tn = rawcode(ein"ab,bc,cd,de,a,b,c,d,f,f,e->ab")
size_dict = uniformsize(tn, 2)
xs = [randn(fill(2, length(ix))...) for ix in OMEinsumContractionOrders.getixsv(tn)]
simplifier, tn2 = merge_greedy(tn, size_dict)
@test tn2 == rawcode(ein"ab,->ab")
@test decorate(tn)(xs...) ≈ decorate(tn2)(apply_simplifier(simplifier, xs)...)
tn3 = optimize_greedy(tn2, uniformsize(tn2, 2))
tn4 = embed_simplifier(tn3, simplifier)
@test !have_identity(tn4)
@test decorate(tn)(xs...) ≈ decorate(tn4)(xs...)
tn3 = optimize_greedy(tn2, uniformsize(tn2, 2))
tn4 = embed_simplifier(tn3, simplifier)
@test !have_identity(tn4)
@test decorate(tn)(xs...) ≈ decorate(tn4)(xs...)
end
@testset "optimize permute" begin
code = ein"lcij,lcjk->kcli" |> rawcode
@test OMEinsumContractionOrders.optimize_output_permute(OMEinsumContractionOrders.getixsv(code), OMEinsumContractionOrders.getiyv(code)) == collect("iklc")
code = ein"lcij,lcjk,c->kcli" |> rawcode
@test OMEinsumContractionOrders.optimize_output_permute(OMEinsumContractionOrders.getixsv(code), OMEinsumContractionOrders.getiyv(code)) == collect("kcli")
code = ein"lcij,lcjk->" |> rawcode
@test OMEinsumContractionOrders.optimize_output_permute(OMEinsumContractionOrders.getixsv(code), OMEinsumContractionOrders.getiyv(code)) == Char[]
end
@testset "optimize permute" begin
code = ein"(lcij,lcjk),lcik->" |> rawcode
res = OMEinsumContractionOrders.NestedEinsum([OMEinsumContractionOrders.NestedEinsum([OMEinsumContractionOrders.NestedEinsum{Char}(1),
OMEinsumContractionOrders.NestedEinsum{Char}(2)], ein"lcij,lcjk->iklc" |> rawcode), OMEinsumContractionOrders.NestedEinsum{Char}(3)], ein"iklc,lcik->" |> rawcode)
@test optimize_permute(code) == res
code = OMEinsumContractionOrders.SlicedEinsum(['c'], ein"(lcij,lcjk),lcik->" |> rawcode)
@test optimize_permute(code) == OMEinsumContractionOrders.SlicedEinsum(['c'], res)
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 11362 | using OMEinsumContractionOrders, Test, Random
using OMEinsumContractionOrders: random_exprtree, ExprTree, ExprInfo,
ruleset, update_tree!, tcscrw, optimize_subtree!, optimize_tree_sa!, labels, tree_timespace_complexity, fast_log2sumexp2,
ExprTree, optimize_greedy, _label_dict, Slicer, optimize_tree
using Graphs
using OMEinsum: decorate
@testset "slicer" begin
    log2_sizes = [1, 2, 3, 4.0]
s = Slicer(log2_sizes, 3, [])
push!(s, 1)
@test_throws AssertionError push!(s, 1)
push!(s, 2)
push!(s, 3)
@test_throws AssertionError push!(s, 4)
replace!(s, 1=>4)
@test s.log2_sizes == [1, 0.0, 0.0, 0.0]
@test s.legs == Dict(2=>2.0, 3=>3.0, 4=>4.0)
@test_throws AssertionError replace!(s, 1=>4)
end
@testset "random expr tree" begin
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
Random.seed!(2)
tree = random_exprtree([[1,2,5], [2,3], [2,4]], [5], 5)
@test tree isa ExprTree
tree2 = random_exprtree(OMEinsumContractionOrders.EinCode([[1,2,5], [2,3], [2,4]], [5]))
@test tree isa ExprTree
code = random_regular_eincode(20, 3)
optcode = optimize_greedy(code, uniformsize(code, 2))
tree3 = ExprTree(optcode)
@test tree isa ExprTree
println(tree)
labelmap = Dict([v=>k for (k,v) in _label_dict(code)])
optcode_reconstruct = OMEinsumContractionOrders.NestedEinsum(tree3, labelmap)
@test optcode == optcode_reconstruct
end
@testset "rules" begin
LeafNode(id, labels) = ExprTree(ExprInfo(labels, id))
t1 = ExprTree(LeafNode(3, [1,2]), ExprTree(LeafNode(1,[2,3]), LeafNode(2,[1,4]), ExprInfo([1,2])), ExprInfo([2]))
t2 = ExprTree(ExprTree(LeafNode(1, [2,3]), LeafNode(2, [1,4]), ExprInfo([1,2,3])), LeafNode(3,[1,2]), ExprInfo([2]))
t3 = ExprTree(LeafNode(1,[2,3]), LeafNode(2, [1,2]), ExprInfo([2]))
t4 = ExprTree(ExprTree(LeafNode(1, [2,3]), LeafNode(2, [1,4]), ExprInfo([1,2])), ExprTree(LeafNode(4,[5,1]), LeafNode(3,[1]), ExprInfo([1])), ExprInfo([2]))
@test ruleset(t1) == 3:4
@test ruleset(t2) == 1:2
@test ruleset(t3) == 1:0
@test ruleset(t4) == 1:4
log2_sizes = ones(5)
_tcsc(t, l) = tcscrw(labels(t.left), labels(t.right), labels(t), l, true)
@test all(_tcsc(t1, log2_sizes) .≈ (2.0, 1.0, log2(10)))
@test all(_tcsc(t2, log2_sizes) .≈ (2.0, 1.0, log2(14)))
@test all(_tcsc(t3, log2_sizes) .≈ (1.0, 1.0, log2(10)))
@test all(_tcsc(t4, log2_sizes) .≈ (2.0, 1.0, log2(8)))
t11 = update_tree!(copy(t1), 3, [2])
@test t11 == ExprTree(LeafNode(1,[2,3]), ExprTree(LeafNode(3,[1,2]), LeafNode(2,[1,4]), ExprInfo([2])), ExprInfo([2]))
t11_ = update_tree!(copy(t1), 4, [1,2])
@test t11_ == ExprTree(LeafNode(2,[1,4]), ExprTree(LeafNode(1,[2,3]), LeafNode(3,[1,2]), ExprInfo([1,2])), ExprInfo([2]))
t22 = update_tree!(copy(t2), 1, [1,2])
@test t22 == ExprTree(ExprTree(LeafNode(1,[2,3]), LeafNode(3,[1,2]), ExprInfo([1,2])), LeafNode(2,[1,4]), ExprInfo([2]))
t22_ = update_tree!(copy(t2), 2, [2])
@test t22_ == ExprTree(ExprTree(LeafNode(3, [1,2]), LeafNode(2,[1,4]), ExprInfo([2])), LeafNode(1,[2,3]), ExprInfo([2]))
t44 = update_tree!(copy(t4), 1, [1,2])
@test t44 == ExprTree(ExprTree(LeafNode(1,[2,3]), ExprTree(LeafNode(4,[5,1]), LeafNode(3,[1]), ExprInfo([1])), ExprInfo([1,2])), LeafNode(2,[1,4]), ExprInfo([2]))
end
@testset "optimization" begin
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
Random.seed!(2)
n = 40
log2_sizes = rand(n+n÷2) * 2
code = random_regular_eincode(n, 3)
optcode = optimize_greedy(code, uniformsize(code, 2))
println(code)
println(optcode)
tree = ExprTree(optcode)
tc0, sc0, rw0 = tree_timespace_complexity(tree, log2_sizes)
size_dict = Dict([j=>exp2(log2_sizes[j]) for j=1:length(log2_sizes)])
cc0 = contraction_complexity(OMEinsumContractionOrders.NestedEinsum(tree), size_dict)
tc0_, sc0_ = cc0.tc, cc0.sc
@test tc0 ≈ tc0_ && sc0 ≈ sc0_
opt_tree = copy(tree)
optimize_subtree!(opt_tree, 100.0, log2_sizes, 5, 2.0, 1.0)
tc1, sc1, rw0 = tree_timespace_complexity(opt_tree, log2_sizes)
@test sc1 < sc0 || (sc1 == sc0 && tc1 < tc0)
end
@testset "optimize tree sa" begin
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
Random.seed!(3)
n = 60
ne = n + n÷2
log2_sizes = ones(ne)
code = random_regular_eincode(n, 3)
optcode = optimize_greedy(code, uniformsize(code, 2))
tree = ExprTree(optcode)
tc0, sc0, rw0 = tree_timespace_complexity(tree, log2_sizes)
opttree = copy(tree)
optimize_tree_sa!(opttree, log2_sizes, Slicer(log2_sizes, 0, []); sc_target=sc0-2.0, βs=0.1:0.1:10.0, niters=100, sc_weight=1.0, rw_weight=1.0)
tc1, sc1, rw1 = tree_timespace_complexity(opttree, log2_sizes)
@test sc1 < sc0 || (sc1 == sc0 && tc1 < tc0)
slicer = Slicer(log2_sizes, 5, [])
optimize_tree_sa!(opttree, log2_sizes, slicer; sc_target=sc0-2.0, βs=0.1:0.1:10.0, niters=100, sc_weight=1.0, rw_weight=1.0)
tc2, sc2, rw2 = tree_timespace_complexity(opttree, slicer.log2_sizes)
@test tc2 <= tc1 + 3
@test sc2 <= sc1 + 3
@test length(slicer) == 5
@test all(l->(slicer.log2_sizes[l]==1 && !haskey(slicer.legs, l)) || (slicer.log2_sizes[l]==0 && haskey(slicer.legs, l)), 1:length(log2_sizes))
@test sc1 < sc0 || (sc1 == sc0 && tc1 < tc0)
end
@testset "sa tree" begin
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
Random.seed!(2)
g = random_regular_graph(220, 3)
code = random_regular_eincode(220, 3)
res = optimize_greedy(code,uniformsize(code, 2))
cc = contraction_complexity(res, uniformsize(code, 2))
tc, sc = cc.tc, cc.sc
@test optimize_tree(res, uniformsize(code, 2); sc_target=32, βs=0.1:0.05:20.0, ntrials=0, niters=10, sc_weight=1.0, rw_weight=1.0) isa OMEinsumContractionOrders.SlicedEinsum
optcode = optimize_tree(res, uniformsize(code, 2); sc_target=32, βs=0.1:0.05:20.0, ntrials=2, niters=10, sc_weight=1.0, rw_weight=1.0)
cc = contraction_complexity(optcode, uniformsize(code, 2))
@test cc.sc <= 32
@test length(optcode.slicing) == 0
# contraction test
code = random_regular_eincode(50, 3)
codek = optimize_greedy(code, uniformsize(code, 2))
codeg = optimize_tree(code, uniformsize(code, 2); initializer=:random)
cc = contraction_complexity(codek, uniformsize(code, 2))
@test cc.sc <= 12
xs = [[2*randn(2, 2) for i=1:75]..., [randn(2) for i=1:50]...]
resg = decorate(codeg)(xs...)
resk = decorate(codek)(xs...)
@test resg ≈ resk
# contraction test
code = random_regular_eincode(50, 3)
codek = optimize_greedy(code, uniformsize(code, 2))
codeg = optimize_tree(codek, uniformsize(code, 2); initializer=:specified)
cc = contraction_complexity(codek, uniformsize(code, 2))
@test cc.sc <= 12
xs = [[2*randn(2, 2) for i=1:75]..., [randn(2) for i=1:50]...]
resg = decorate(codeg)(xs...)
resk = decorate(codek)(xs...)
@test resg ≈ resk
end
@testset "fast log2sumexp2" begin
a, b, c = randn(3)
@test fast_log2sumexp2(a, b) ≈ log2(sum(exp2.([a,b])))
@test fast_log2sumexp2(a, b, c) ≈ log2(sum(exp2.([a,b,c])))
end
@testset "slicing" begin
function random_regular_eincode(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
# contraction test
Random.seed!(2)
code = random_regular_eincode(100, 3)
code0 = optimize_greedy(code, uniformsize(code, 2))
codek = optimize_tree(code0, uniformsize(code, 2); initializer=:specified, nslices=0, niters=5)
codeg = optimize_tree(code0, uniformsize(code, 2); initializer=:specified, nslices=5, niters=5)
cc0 = contraction_complexity(codek, uniformsize(code, 2))
cc = contraction_complexity(codeg, uniformsize(code, 2))
@show cc.tc, cc.sc, cc0.tc, cc0.sc
@test cc.sc <= cc0.sc - 4
xs = [[2*randn(2, 2) for i=1:150]..., [randn(2) for i=1:100]...]
resg = decorate(codeg)(xs...)
resk = decorate(codek)(xs...)
@test resg ≈ resk
# with open edges
Random.seed!(2)
code = OMEinsumContractionOrders.EinCode(random_regular_eincode(100, 3).ixs, [3,81,2])
codek = optimize_tree(code0, uniformsize(codek, 2); initializer=:specified, nslices=0, niters=5)
codeg = optimize_tree(code0, uniformsize(codeg, 2); initializer=:specified, nslices=5, niters=5)
cc0 = contraction_complexity(codek, uniformsize(code, 2))
tc0, sc0 = cc0.tc, cc0.sc
cc = contraction_complexity(codeg, uniformsize(code, 2))
tc, sc = cc.tc, cc.sc
fl = flop(codeg, uniformsize(code, 2))
@test tc ≈ log2(fl)
@show tc, sc, tc0, sc0
@test sc <= sc0 - 4
@test sc <= 17
xs = [[2*randn(2, 2) for i=1:150]..., [randn(2) for i=1:100]...]
resg = decorate(codeg)(xs...)
resk = decorate(codek)(xs...)
@test resg ≈ resk
# slice with fixed slices
Random.seed!(2)
code = OMEinsumContractionOrders.EinCode(random_regular_eincode(20, 3).ixs, [3,10,2])
code0 = optimize_tree(code, uniformsize(code, 2); nslices=5, fixed_slices=[5,3,8,1,2,4,11])
code1 = optimize_tree(code, uniformsize(code, 2); ntrials=1, nslices=5)
code2 = optimize_tree(code, uniformsize(code, 2); ntrials=1, nslices=5, fixed_slices=[5,3])
code3 = optimize_tree(code, uniformsize(code, 2); ntrials=1, nslices=5, fixed_slices=[5,1,4,3,2])
code4 = optimize_tree(code, uniformsize(code, 2); ntrials=1, fixed_slices=[5,1,4,3,2])
xs = [[2*randn(2, 2) for i=1:30]..., [randn(2) for i=1:20]...]
@test length(code0.slicing) == 7 && code0.slicing == [5,3,8,1,2,4,11]
@test length(code2.slicing) == 5 && code2.slicing[1:2] == [5,3]
@test length(code3.slicing) == 5 && code3.slicing == [5,1,4,3,2]
@test length(code4.slicing) == 5 && code4.slicing == [5,1,4,3,2]
@test decorate(code1)(xs...) ≈ decorate(code2)(xs...)
@test decorate(code1)(xs...) ≈ decorate(code3)(xs...)
function random_regular_eincode_char(n, k)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax('0' + e.src, '0'+e.dst)...] for e in Graphs.edges(g)]
return OMEinsumContractionOrders.EinCode([ixs..., [['0'+i] for i in Graphs.vertices(g)]...], Char[])
end
code = OMEinsumContractionOrders.EinCode(random_regular_eincode_char(20, 3).ixs, ['3','8','2'])
code1 = optimize_tree(code, uniformsize(code, 2); ntrials=1, fixed_slices=['7'])
@test eltype(code1.eins.eins.iy) == Char
end
| OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 2241 | using OMEinsumContractionOrders
using OMEinsumContractionOrders: IncidenceList, optimize_exact_treewidth, getixsv
using OMEinsum: decorate
using Test, Random
@testset "tree width" begin
optimizer = ExactTreewidth()
size_dict = Dict([c=>(1<<i) for (i,c) in enumerate(['a', 'b', 'c', 'd', 'e', 'f'])]...)
# eincode with no open edges
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], Vector{Char}())
tensors = [rand([size_dict[j] for j in ixs]...) for ixs in getixsv(eincode)]
optcode = optimize_exact_treewidth(optimizer, eincode, size_dict)
cc = contraction_complexity(optcode, size_dict)
# test flop
@test cc.tc ≈ log2(flop(optcode, size_dict))
@test (16 <= cc.tc <= log2(exp2(10)+exp2(16)+exp2(15)+exp2(9))) | (cc.tc ≈ log2(exp2(10)+exp2(16)+exp2(15)+exp2(9)))
@test cc.sc == 11
@test decorate(eincode)(tensors...) ≈ decorate(optcode)(tensors...)
# eincode with open edges
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
tensors = [rand([size_dict[j] for j in ixs]...) for ixs in getixsv(eincode)]
optcode = optimize_exact_treewidth(optimizer, eincode, size_dict)
cc = contraction_complexity(optcode, size_dict)
@test cc.sc == 11
@test decorate(eincode)(tensors...) ≈ decorate(optcode)(tensors...)
# disconnect contraction tree
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e'], ['e'], ['f']], ['a', 'f'])
tensors = [rand([size_dict[j] for j in ixs]...) for ixs in getixsv(eincode)]
optcode = optimize_exact_treewidth(optimizer, eincode, size_dict)
cc = contraction_complexity(optcode, size_dict)
@test cc.sc == 7
@test decorate(eincode)(tensors...) ≈ decorate(optcode)(tensors...)
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e'], ['e'], ['f'], Char[]], ['a', 'f'])
    tensors = vcat(tensors, [fill(2.0, ())])  # append a scalar (zero-dimensional) tensor
optcode = optimize_exact_treewidth(optimizer, eincode, size_dict)
cc = contraction_complexity(optcode, size_dict)
@test decorate(eincode)(tensors...) ≈ decorate(optcode)(tensors...)
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | code | 4325 | using OMEinsum
using OMEinsumContractionOrders: ein2hypergraph, ein2elimination
using Test, OMEinsumContractionOrders
# tests before the extension is loaded
@testset "luxor tensor plot dependency check" begin
@test_throws ArgumentError begin
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
ein2hypergraph(eincode)
end
@test_throws ArgumentError begin
eincode = OMEinsum.rawcode(ein"((ij, jk), kl), lm -> im")
ein2elimination(eincode)
end
@test_throws ArgumentError begin
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], Vector{Char}())
viz_eins(eincode)
end
@test_throws ArgumentError begin
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], Vector{Char}())
nested_code = optimize_code(eincode, uniformsize(eincode, 2), GreedyMethod())
viz_contraction(nested_code, pathname = "")
end
end
using LuxorGraphPlot
using LuxorGraphPlot.Luxor
@testset "eincode to hypergraph" begin
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
g1 = ein2hypergraph(eincode)
nested_code = optimize_code(eincode, uniformsize(eincode, 2), GreedyMethod())
g2 = ein2hypergraph(nested_code)
sliced_code = optimize_code(eincode, uniformsize(eincode, 2), TreeSA(nslices = 1))
g3 = ein2hypergraph(sliced_code)
@test g1 == g2 == g3
@test size(g1.adjacency_matrix, 1) == 5
@test size(g1.adjacency_matrix, 2) == 6
end
@testset "eincode to elimination order" begin
eincode = OMEinsum.rawcode(ein"((ij, jk), kl), lm -> im")
elimination_order = ein2elimination(eincode)
@test elimination_order == ['j', 'k', 'l']
end
@testset "visualize eincode" begin
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], Vector{Char}())
t = viz_eins(eincode)
@test t isa Luxor.Drawing
nested_code = optimize_code(eincode, uniformsize(eincode, 2), GreedyMethod())
t = viz_eins(nested_code)
@test t isa Luxor.Drawing
sliced_code = optimize_code(eincode, uniformsize(eincode, 2), TreeSA())
t = viz_eins(sliced_code)
@test t isa Luxor.Drawing
open_eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
t = viz_eins(open_eincode)
@test t isa Luxor.Drawing
# filename and location specified
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], Vector{Char}())
filename = tempname() * ".png"
viz_eins(eincode; filename, locs=vcat([(randn() * 60, 0.0) for i=1:5], [(randn() * 60, 320.0) for i=1:6]))
@test isfile(filename)
end
@testset "visualize contraction" begin
eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], Vector{Char}())
nested_code = optimize_code(eincode, uniformsize(eincode, 2), GreedyMethod())
t_mp4 = viz_contraction(nested_code)
tempmp4 = tempname() * ".mp4"
tempgif = tempname() * ".gif"
t_mp4_2 = viz_contraction(nested_code, filename = tempmp4)
@test t_mp4 isa String
@test t_mp4_2 isa String
t_gif = viz_contraction(nested_code, filename = tempgif)
@test t_gif isa String
@test_throws AssertionError begin
viz_contraction(nested_code, filename = "test.avi")
end
sliced_code = optimize_code(eincode, uniformsize(eincode, 2), TreeSA())
t_mp4 = viz_contraction(sliced_code)
t_mp4_2 = viz_contraction(sliced_code, filename = tempmp4)
@test t_mp4 isa String
@test t_mp4_2 isa String
t_gif = viz_contraction(sliced_code, filename = tempgif)
@test t_gif isa String
sliced_code2 = optimize_code(eincode, uniformsize(eincode, 2), TreeSA(nslices = 1))
t_mp4 = viz_contraction(sliced_code2)
t_mp4_2 = viz_contraction(sliced_code2, filename = tempmp4)
@test t_mp4 isa String
@test t_mp4_2 isa String
t_gif = viz_contraction(sliced_code2, filename = tempgif)
@test t_gif isa String
end | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | docs | 6566 | # OMEinsumContractionOrders
OMEinsumContractionOrders is a Julia package that provides an `optimize_code` function for finding optimal contraction orders for tensor networks. It is designed to work with multiple tensor network packages, such as the [OMEinsum.jl](https://github.com/under-Peter/OMEinsum.jl) package and [ITensorNetworks.jl](https://github.com/mtfishman/ITensorNetworks.jl).
[](https://github.com/TensorBFS/OMEinsumContractionOrders.jl/actions)
[](https://codecov.io/gh/TensorBFS/OMEinsumContractionOrders.jl)
## Installation
To install OMEinsumContractionOrders, please follow these steps:
1. Open Julia's interactive session (known as [REPL](https://docs.julialang.org/en/v1/manual/getting-started/)) by typing `julia` in your terminal.
2. Press the <kbd>]</kbd> key in the REPL to enter the package mode.
3. Type `add OMEinsumContractionOrders` to install the stable release of the package.
4. (Optional) If you want to use the `KaHyParBipartite` optimizer, which is based on the KaHyPar library, type `add KaHyPar`. Note that this step is optional because some users may have issues with the binary dependencies of KaHyPar (please check issues: [this](https://github.com/kahypar/KaHyPar.jl/issues/12) and [this](https://github.com/kahypar/KaHyPar.jl/issues/19)).
## Example 1: Use it directly
The contraction order optimizer is implemented in the `optimize_code` function. It takes three arguments: `code`, `size`, and `optimizer`. The `code` argument is the [einsum notation](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html) to be optimized. The `size` argument gives the sizes of the variables in the einsum notation. The `optimizer` argument is the optimizer to be used. The `optimize_code` function returns the optimized contraction order. One can use the `contraction_complexity` function to get the time, space and read-write complexity of the returned contraction order.
```julia
julia> using OMEinsumContractionOrders, Graphs, KaHyPar
julia> function random_regular_eincode(n, k; optimize=nothing)
g = Graphs.random_regular_graph(n, k)
ixs = [[minmax(e.src,e.dst)...] for e in Graphs.edges(g)]
return EinCode([ixs..., [[i] for i in Graphs.vertices(g)]...], Int[])
end
julia> code = random_regular_eincode(200, 3);
julia> optcode_tree = optimize_code(code, uniformsize(code, 2),
TreeSA(sc_target=28, βs=0.1:0.1:10, ntrials=2, niters=100, sc_weight=3.0));
julia> optcode_tree_with_slice = optimize_code(code, uniformsize(code, 2),
TreeSA(sc_target=28, βs=0.1:0.1:10, ntrials=2, niters=100, sc_weight=3.0, nslices=5));
julia> optcode_kahypar = optimize_code(code, uniformsize(code, 2),
KaHyParBipartite(sc_target=30, max_group_size=50));
julia> optcode_sa = optimize_code(code, uniformsize(code, 2),
SABipartite(sc_target=30, max_group_size=50));
julia> contraction_complexity(code, uniformsize(code, 2))
Time complexity: 2^200.0
Space complexity: 2^0.0
Read-write complexity: 2^10.644757592516257
julia> contraction_complexity(optcode_kahypar, uniformsize(code, 2))
Time complexity: 2^39.5938886486877
Space complexity: 2^28.0
Read-write complexity: 2^30.39890775966298
julia> contraction_complexity(optcode_sa, uniformsize(code, 2))
Time complexity: 2^41.17129641027078
Space complexity: 2^29.0
Read-write complexity: 2^31.493976190321106
julia> contraction_complexity(optcode_tree, uniformsize(code, 2))
Time complexity: 2^35.06468305863757
Space complexity: 2^28.0
Read-write complexity: 2^30.351552349259258
julia> contraction_complexity(optcode_tree_with_slice, uniformsize(code, 2))
Time complexity: 2^33.70760100663681
Space complexity: 2^24.0
Read-write complexity: 2^32.17575935629581
```
## Example 2: Use it in `OMEinsum`
`OMEinsumContractionOrders` is shipped with the [`OMEinsum`](https://github.com/under-Peter/OMEinsum.jl) package. You can use it to optimize the contraction order of an `OMEinsum` expression.
```julia
julia> using OMEinsum
julia> code = ein"ij, jk, kl, il->"
ij, jk, kl, il ->
julia> optimized_code = optimize_code(code, uniformsize(code, 2), TreeSA())
SlicedEinsum{Char, NestedEinsum{DynamicEinCode{Char}}}(Char[], ki, ki ->
├─ jk, ij -> ki
│ ├─ jk
│ └─ ij
└─ kl, il -> ki
├─ kl
└─ il
)
```
## Extensions
### LuxorTensorPlot
`LuxorTensorPlot` is an extension of the `OMEinsumContractionOrders` package that provides a visualization of the contraction order. It is designed to work with the `OMEinsumContractionOrders` package. To use `LuxorTensorPlot`, please follow these steps:
```julia
pkg> add OMEinsumContractionOrders, LuxorGraphPlot
julia> using OMEinsumContractionOrders, LuxorGraphPlot
```
and then the extension will be loaded automatically.
The extension provides the following two functions, `viz_eins` and `viz_contraction`: the former plots the tensor network as a graph, and the latter generates a video or gif of the contraction process.
Here is an example:
```julia
julia> using OMEinsumContractionOrders, LuxorGraphPlot
julia> eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
ab, acd, bcef, e, df -> a
julia> viz_eins(eincode, filename = "eins.png")
julia> nested_eins = optimize_code(eincode, uniformsize(eincode, 2), GreedyMethod())
ab, ab -> a
├─ ab
└─ acf, bcf -> ab
├─ acd, df -> acf
│ ├─ acd
│ └─ df
└─ bcef, e -> bcf
├─ bcef
└─ e
julia> viz_contraction(nested_eins)
[ Info: Generating frames, 7 frames in total
[ Info: Creating video at: /var/folders/3y/xl2h1bxj4ql27p01nl5hrrnc0000gn/T/jl_SiSvrH/contraction.mp4
"/var/folders/3y/xl2h1bxj4ql27p01nl5hrrnc0000gn/T/jl_SiSvrH/contraction.mp4"
```
The resulting image and video will be saved in the current working directory, and the image is shown below:
<div style="text-align:center">
<img src="examples/eins.png" alt="Image" width="40%" />
</div>
The large white nodes represent the tensors, and the small colored nodes represent the indices, red for closed indices and green for open indices.
## References
If you find this package useful in your research, please cite the *relevant* papers in [CITATION.bib](CITATION.bib).
## Multi-GPU computation
Please check this Gist:
https://gist.github.com/GiggleLiu/d5b66c9883f0c5df41a440589983ab99
## Authors
OMEinsumContractionOrders was developed by Jin-Guo Liu and Pan Zhang. | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.9.2 | 4eb42d7509a236272be29a88f4668d9b3da2fdce | docs | 164 | # Examples
1. [slicing_multigpu.jl](slicing_multigpu.jl) - Multi-GPU simulation of a quantum circuit (parallel in slices).
```bash
$ julia slicing_multigpu.jl
``` | OMEinsumContractionOrders | https://github.com/TensorBFS/OMEinsumContractionOrders.jl.git |
|
[
"MIT"
] | 0.3.3 | 490baeab9c00ca78309fdc7ae4958c2331230ffe | code | 6016 | module KittyTerminalImages
using Base64: base64encode
using Rsvg
using Cairo: FORMAT_ARGB32, CairoImageSurface, CairoContext
import Cairo
using Base.Multimedia: xdisplayable
using ImageCore: RGBA, channelview
using CodecZlib: ZlibCompressor, ZlibCompressorStream
using Requires: @require
using Interpolations: interpolate, BSpline, Linear
using PNGFiles
import Base: display
export pushKittyDisplay!, forceKittyDisplay!, set_kitty_config!, get_kitty_config, popKittyDisplay!
struct KittyDisplay <: AbstractDisplay end
include("configuration.jl")
include("images.jl")
function __init__()
# TODO verify that we are actually using kitty
pushKittyDisplay!()
@require Compose="a81c6b42-2e10-5240-aca2-a61377ecd94b" include("integration/Compose.jl")
end
function draw_temp_file(img)
    # TODO ensure that there is no race condition with these tempfiles
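    # kitty graphics protocol keys used here: f=100 (PNG data), a='T' (transmit
    # and display); t='t' tells kitty the payload is the path of a temporary file
    # rather than the image data itself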
path, io = mktemp()
PNGFiles.save(io, img)
close(io)
payload = transcode(UInt8, base64encode(path))
write_kitty_image_escape_sequence(stdout, payload, f=100, t='t', X=1, Y=1, a='T')
end
function draw_direct(img)
# TODO this adds some unnecessary channels for alpha and colors that are not always necessary
# TODO might be easier to write to a png then we have some compression, then we can also maybe remove zlib
img_rgba = permutedims(channelview(RGBA.(img)), (1, 3, 2))
img_encoded = base64encode(ZlibCompressorStream(IOBuffer(vec(reinterpret(UInt8, img_rgba)))))
(_, width, height) = size(img_rgba)
buff = IOBuffer()
partitions = Iterators.partition(transcode(UInt8, img_encoded), 4096)
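    # kitty's graphics protocol requires large payloads to be split into chunks
    # of at most 4096 bytes; m=1 signals that more chunks follow, m=0 marks the
    # final chunk of the image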
for (i, payload) in enumerate(partitions)
m = (i == length(partitions)) ? 0 : 1 # 0 if this is the last data chunk
if i == 1
write_kitty_image_escape_sequence(buff, payload, f=32, s=width, v=height, X=1, Y=1, a='T', o='z', m=m)
else
write_kitty_image_escape_sequence(buff, payload, m=m)
end
end
write(stdout, take!(buff))
return
end
# allows to define custom behaviour for special cases of show within this package
show_custom(io::IO, m::MIME, x) = show(io, m , x)
# values for control data: https://sw.kovidgoyal.net/kitty/graphics-protocol.html#control-data-reference
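# e.g. passing f=100, a='T' produces the sequence "\033_Gf=100,a=T;<payload>\033\\"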
function write_kitty_image_escape_sequence(io::IO, payload::AbstractVector{UInt8}; control_data...)
    cmd_prefix = "\033_G"
    first_iteration = true
    for (key, value) in control_data
if first_iteration
first_iteration = false
else
cmd_prefix *= ','
end
cmd_prefix *= string(key)
cmd_prefix *= '='
        cmd_prefix *= string(value)
end
cmd_prefix *= ';'
cmd_postfix = "\033\\"
cmd = [transcode(UInt8, cmd_prefix); payload; transcode(UInt8, cmd_postfix)]
write(io, cmd)
return
end
# This is basically just what is defined in Base.Multimedia.display(x)
# but tries to display with KittyDisplay before trying the rest of
# the stack.
function _display(@nospecialize x)
displays = Base.Multimedia.displays
    for d in reverse(vcat(displays, KittyDisplay()))
if xdisplayable(d, x)
try
display(d, x)
return
catch e
if !(isa(e, MethodError) && e.f in (display, show))
rethrow()
end
end
end
end
throw(MethodError(display, (x,)))
end
function forceKittyDisplay!()
@eval display(@nospecialize x) = _display(x)
return
end
function pushKittyDisplay!()
d = Base.Multimedia.displays
if !isempty(d) && !isa(d[end], KittyDisplay)
Base.Multimedia.pushdisplay(KittyDisplay())
end
return
end
function popKittyDisplay!()
d = Base.Multimedia.displays
if length(d) > 1 && isa(d[end], KittyDisplay)
Base.Multimedia.popdisplay()
end
return
end
# Supported mime types; they are tried in the order in which they appear in the returned list.
# svg should be preferred over png so that we can apply scaling to a vector graphic instead of
# pixels if both formats are supported, but because of a bug (https://github.com/simonschoelly/KittyTerminalImages.jl/issues/4)
# that has not been solved yet, some svg's are not rendered correctly
function kitty_mime_types()
if get_kitty_config(:prefer_png_to_svg)
[MIME"image/png"(), MIME"image/svg+xml"()]
else
[MIME"image/svg+xml"(), MIME"image/png"()]
end
end
function display(d::KittyDisplay, x)
for m in kitty_mime_types()
if showable(m, x)
display(d, m, x)
return
end
end
throw(MethodError(display, (x,)))
end
function display(d::KittyDisplay,
m::MIME"image/png", x; scale=get_kitty_config(:scale, 1.0))
buff = IOBuffer()
show_custom(buff, m, x)
    seekstart(buff) # we need to reset the IOBuffer to its start
img = PNGFiles.load(buff)
img = imresize(img; ratio=scale)
if get_kitty_config(:transfer_mode) == :direct
draw_direct(img)
else
draw_temp_file(img)
end
return
end
function display(d::KittyDisplay,
m::MIME"image/svg+xml", x; scale=get_kitty_config(:scale, 1.0))
    # Write x to a cairo surface and then use the png display method
buff = IOBuffer()
show_custom(buff, m, x)
svg_data = String(take!(buff))
handle = Rsvg.handle_new_from_data(svg_data)
dims = Rsvg.handle_get_dimensions(handle)
width = round(Int, dims.width * scale)
height = round(Int, dims.height * scale)
surface = CairoImageSurface(width, height, FORMAT_ARGB32)
context = CairoContext(surface)
Cairo.scale(context, scale, scale)
Rsvg.handle_render_cairo(context, handle)
# Rsvg.handle_free(handle) # this leads to error messages
display(d, MIME"image/png"(), surface; scale=1.0) # scaling already happened to svg
return
end
randkitty() = '\U1f600' + rand([-463, -504, 63, 57, 64, 60, 62, 59, 58, 56, 61, 897, -465])
end # module
| KittyTerminalImages | https://github.com/simonschoelly/KittyTerminalImages.jl.git |
|
[
"MIT"
] | 0.3.3 | 490baeab9c00ca78309fdc7ae4958c2331230ffe | code | 1214 |
# TODO there might be a Julia library for handling such configurations
# TODO maybe add the possibility to set config values from environment variables
# TODO add a way to validate the value and type of configs
config_values = Dict{Symbol, Any}()
supported_config_keys = [:scale, :transfer_mode, :prefer_png_to_svg]
function set_kitty_config!(key, value)
if key ∉ supported_config_keys
throw(DomainError(key, "$key is not a valid config key. Valid keys are $supported_config_keys"))
end
    # TODO this could be handled more generally
if key == :transfer_mode
if value ∉ (:direct, :temp_file)
throw(DomainError(value, "value for key :transfer_mode must be either :direct or :temp_file"))
end
end
if key == :prefer_png_to_svg
if value ∉ (true, false)
throw(DomainError(value, "value for key :prefer_png_to_svg must be either true or false"))
end
end
config_values[key] = value
return nothing
end
get_kitty_config(key::Symbol) = config_values[key]
get_kitty_config(key::Symbol, default) = get(config_values, key, default)
# insert some default
set_kitty_config!(:transfer_mode, :direct)
set_kitty_config!(:prefer_png_to_svg, true)
| KittyTerminalImages | https://github.com/simonschoelly/KittyTerminalImages.jl.git |
|
[
"MIT"
] | 0.3.3 | 490baeab9c00ca78309fdc7ae4958c2331230ffe | code | 1232 |
"""
imresize(img; ratio)
Return a copy of the image `img` scaled by `ratio`.
In case `ratio` is close to `1`, this function might also return
the original argument.
This is a replacement for a similar function from ImageTransformations.jl,
in order to avoid the loading time of that package. Currently this function might
not be as fancy as the one it replaces.
"""
function imresize(img; ratio::Float64)
height, width = size(img)
new_height = max(1, round(Int, ratio * height))
new_width = max(1, round(Int, ratio * width))
if (new_height == height && new_width == width)
return img
end
new_img = similar(img, new_height, new_width)
# TODO it would be nicer to use Lanczos interpolation
# but that one creates negative colors at the moment
itp = interpolate(img, BSpline(Linear()))
# TODO StepRangeLen does not work for length == 1, maybe we can
# solve this without type instability
xs = (new_width <= 1) ? (1:1) : range(1, width, length=new_width)
ys = (new_height <= 1) ? (1:1) : range(1, height, length=new_height)
for (new_x, x) in enumerate(xs), (new_y, y) in enumerate(ys)
new_img[new_y, new_x] = itp(y, x)
end
return new_img
end
| KittyTerminalImages | https://github.com/simonschoelly/KittyTerminalImages.jl.git |
|
[
"MIT"
] | 0.3.3 | 490baeab9c00ca78309fdc7ae4958c2331230ffe | code | 318 | import .Compose
# This is necessary as without `false` compose tries to draw a png by themself when calling show
# instead of writing it to `io`
show_custom(io::IO, ::MIME"image/png", ctx::Compose.Context) =
Compose.draw(Compose.PNG(io, Compose.default_graphic_width, Compose.default_graphic_height, false), ctx)
| KittyTerminalImages | https://github.com/simonschoelly/KittyTerminalImages.jl.git |
|
[
"MIT"
] | 0.3.3 | 490baeab9c00ca78309fdc7ae4958c2331230ffe | docs | 6838 | # KittyTerminalImages.jl

[](https://juliahub.com/ui/Packages/KittyTerminalImages/gIOCR)
[](https://juliahub.com/ui/Packages/KittyTerminalImages/gIOCR)
## Description
A package that allows Julia to display images in the kitty terminal editor.
## Screenshots
| | | |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| <figure><img src="https://github.com/simonschoelly/KittyTerminalImages.jl/blob/master/screenshots/screenshot-colors.png" alt="Colors.jl" style="zoom:50%;" /><figcaption>Colors.jl</figcaption></figure> | <figure><img src="https://github.com/simonschoelly/KittyTerminalImages.jl/blob/master/screenshots/screenshot-images.png" alt="Images.jl" style="zoom:50%;" /><figcaption>Images.jl</figcaption></figure> | <img src="https://github.com/simonschoelly/KittyTerminalImages.jl/blob/master/screenshots/screenshot-plots.png" alt="Plots.jl" style="zoom:50%;" /><figcaption>Plots.jl</figcaption></figure> |
| <figure><img src="https://github.com/simonschoelly/KittyTerminalImages.jl/blob/master/screenshots/screenshot-luxor.png" alt="Luxor.jl" style="zoom:50%;" /><figcaption>Luxor.jl</figcaption></figure> | <figure><img src="https://github.com/simonschoelly/KittyTerminalImages.jl/blob/master/screenshots/screenshot-juliagraphs.png" alt="JuliaGraphs" style="zoom:50%;" /><figcaption>JuliaGraphs</figcaption></figure> | |
## Installation
KittyTerminalImages.jl can be installed from the package manager by typing the following to your REPL:
```julia
] add KittyTerminalImages
```
In addition you will need install the Kitty terminal emulator. It works on macOS or Linux but (at least currently) not on Windows. You can get it here: https://github.com/kovidgoyal/kitty
## Usage
Simple load KittyTerminalImages in your REPL (assuming that you have opened Julia in kitty)
```julia
using KittyTerminalImages
```
Sometimes when loading another package, such as Plots.jl or Gadfly.jl, they will put their own method for displaying images
in an external program on the display stack, thus overriding the behaviour of KittyTerminalImages. To put KittyTerminalImages back
on top of the stack, execute the following:
```julia
pushKittyDisplay!()
```
To stop using KittyTerminalImages, execute the following:
```julia
popKittyDisplay!()
```
This will remove the Kitty image display from the end
of the display list, if it contains at least one
other element.
You can also override the general display behaviour so KittyTerminalImages is on top of the stack. This is a hack and can have
unwanted side effects. To do that, execute the following:
```julia
forceKittyDisplay!()
```
### Makie
It is possible to use [Makie.jl](https://github.com/JuliaPlots/Makie.jl) together with KittyTerminalImages although there is no interactivity nor any animations.
After loading a Makie backend it might be necessary to run `AbstractPlotting.inline!(true)` so that the plots are shown in the terminal instead
of an external window. In my experience that is necessary for the [GLMakie.jl](https://github.com/JuliaPlots/GLMakie.jl) backend
but not for the [CairoMakie.jl](https://github.com/JuliaPlots/CairoMakie.jl) backend.
### Running scripts
Beware that KittyTerminalImages.jl relies on the `display` function to draw the plots, and this is *not* called automatically
when you are writing a script, unless the plotting command is the very last command in the script. If you want to force the
display of a plot, you can wrap the call with `display`:
```julia
# This is a script named "test.jl"
using Plots
using KittyTerminalImages
println("Here is the first plot:")
# We call "display" on plot's return value
plot(sin, 0:0.1:2π) |> display
println() # Force a newline
# Another plot
println("Here is another plot:")
plot(exp, 0:0.1:1)
plot!(log, 0.1:0.1:1) |> display # Only call "display" on the last "plot!"
println() # Force a newline
println("End of the script")
```
The calls to `display` will make the plot appear when you run the script calling `julia test.jl` from the shell or writing
`include("test.jl")` in the REPL.
## Configuration
### Setting the scale
If the images are too small or too large for your terminal,
you can specify a scale
```julia
set_kitty_config!(:scale, 2.0) # scale all images by a factor 2
```
### Setting preference for PNG or SVG
Certain Julia objects can be drawn as either a PNG or an SVG image. In case both formats are possible,
one can specify that PNG should be preferred by setting the `:prefer_png_to_svg` config value:
```julia
set_kitty_config!(:prefer_png_to_svg, true)
```
At the moment this value is set to `true` by default as in some cases the svg renderer creates some incorrect images.
If the `:scale` config is also set, this has the disadvantage that scaled images may appear blurry.
### Setting the transfer mode
To transfer the image from Julia to kitty, one can select between two transfer modes:
* `:direct` (default) -- transfer the image with escape sequences
* `:temp_file` -- transfer the image by writing it to a temporary file and then transfer only the path of that image
Only `:direct` works if Julia is accessed remotely with SSH but if Julia is on the same machine as kitty then one might switch to `:temp_file` which might be slightly faster.
To switch the mode one can do
```julia
set_kitty_config!(:transfer_mode, :temp_file)
```
## Features
KittyTerminalImages can display all data types that can be converted to either `PNG` or `SVG`.
## Limitations
* There are currently some unresolved issues with some SVG images.
* Does not work with tmux or screen yet.
* Can only display static images, there is no interaction.
* There might be some problems with some Julia packages. If that is the case, feel free to open an issue or submit a PR with a fix.
## TODO list
- [ ] Display LaTeX images.
- [X] Support for SSH.
- [ ] Support for tmux and screen.
- [x] Add an option for setting the image output size.
- [ ] Query for the terminal size and colors.
- [ ] Allow specifying placement of the images - if possible have a mode where the terminal is split into a text and an image section.
- [ ] Figure out if it is possible to play animations.
## Similar packages
* [TerminalExtensions.jl](https://github.com/Keno/TerminalExtensions.jl)
* [ImageInTerminal.jl](https://github.com/JuliaImages/ImageInTerminal.jl)
* [SixelTerm.jl](https://github.com/tshort/SixelTerm.jl)
* [TerminalGraphics.jl](https://github.com/m-j-w/TerminalGraphics.jl) (outdated)
| KittyTerminalImages | https://github.com/simonschoelly/KittyTerminalImages.jl.git |
|
[
"MIT"
] | 0.1.1 | 3eb6fd237fd946669acbb95b87eda029cc1d712d | code | 1417 | using BenchmarkTools
using RunningQuantiles
import RollingFunctions
import FastRunningMedian
import SortFilters
using DataStructures
function make_v(p_nan = 0.1, n=10_000)
v = rand(n)
v[rand(n) .< p_nan] .= NaN
v
end
benches = OrderedDict(
"FastRunningMedian" => (v,w) -> FastRunningMedian.running_median(v, w),
"SortFilters" => (v,w) -> SortFilters.movsort(v, w, 0.5),
"RunningQuantiles" => (v,w) -> running_median(v, w),
)
function bench(f,k,n,w)
@info "Benchmarking $k, w=$w"
@benchmark $f(v,$w) setup=(v=make_v(0,$n)) #seconds=0.5
# v = make_v(0,n)
# t = @elapsed f(v,w)
# (; times = [t])
end
n = 100_000
w = [3,11,31,101,301,1001,3001,10_001]
b = OrderedDict(k => [bench(f,k,n,w) for w in w]
for (k,f) in benches)
##
using Plots
using StatsBase
function plot_benchmarks!(x, benchmarks; kwargs...)
times = [b.times for b in benchmarks] ./ 1e9
#errs = @. std(times) / √length(times)
plot!(x, mean.(times);
ylabel = "time [s]",
xlabel = "window length",
ribbon=std.(times),
#yerr=errs,
#yerr=std.(times),
kwargs...)
end
plot(title="running median, n=$n", legend=:topleft)
for (k,v) in b
plot_benchmarks!(w, v; label=k, xscale=:log10, yscale=:log10, lw=3, m=true)
end
#current()
savefig("RunningQuantiles benchmarks.png")
| RunningQuantiles | https://github.com/yha/RunningQuantiles.jl.git |
|
[
"MIT"
] | 0.1.1 | 3eb6fd237fd946669acbb95b87eda029cc1d712d | code | 3601 | module RunningQuantiles
export running_quantile, running_median, SkipNaNs, PropagateNaNs, ErrOnNaN
using SkipLists
using OffsetArrays
using Statistics
include("sortedvector.jl")
abstract type NaNHandling end
struct SkipNaNs <: NaNHandling end
struct PropagateNaNs <: NaNHandling end
struct ErrOnNaN <: NaNHandling end
#_q(v,p) = quantile(v, p; sorted=true)
# using the internal `Statistics._quantile` to skip the check for NaNs in v
_q(v,p) = Statistics._quantile(v, p)
_quantile(::SkipNaNs, p, non_nans, has_nans) = isempty(non_nans) ? NaN : _q(non_nans,p)
_quantile(::PropagateNaNs, p, non_nans, has_nans) = has_nans || isempty(non_nans) ? NaN : _q(non_nans,p)
_quantile(::ErrOnNaN, p, non_nans, has_nans) = has_nans ? _nan_error() : isempty(non_nans) ? NaN : _q(non_nans,p)
_nan_error() = error("NaNs encountered in `running_quantile` with `nan_mode=ErrOnNaN`")
make_window(r::AbstractUnitRange) = r
function make_window(winlen::Int)
winlen > 0 && isodd(winlen) || throw(ArgumentError("Window length must be an odd positive integer."))
-winlen÷2:winlen÷2
end
"""
running_median(v, w, nan_mode=SkipNaNs())
Computes the running median of the vector `v` with window `w`, where `w` is an odd window length, or a range of offsets.
See [`running_quantile`](@ref) for details.
"""
function running_median(v, w, nan_mode=SkipNaNs(); buffer = default_buffer(v,0.5,w))
running_quantile(v, 0.5, w, nan_mode; buffer)
end
"""
result = running_quantile(v, p, w, nan_mode=SkipNaNs())
Computes the running `p`-th quantile of the vector `v` with window `w`, where `w` is an odd window length, or a range of offsets.
Specifically,
- if `w` is an `AbstractUnitRange`, `result[i]` is the `p`-th quantile of `v[(i .+ w) ∩ eachindex(v)]`, where `NaN`s are handled according to `nan_mode`:
  - `nan_mode==SkipNaNs()`: `NaN` values are ignored; quantile is computed over non-`NaN`s
  - `nan_mode==PropagateNaNs()`: the result is `NaN` whenever the input window contains `NaN`
  - `nan_mode==ErrOnNaN()`: an error is raised if at least one input window contains `NaN`
- if `w` is an odd integer, a centered window of length `w` is used, namely `-w÷2:w÷2`
"""
function running_quantile(v, p, w, nan_mode=SkipNaNs(); buffer = default_buffer(v,p,w))
_running_quantile(v, p, make_window(w), nan_mode; buffer)
end
function _running_quantile(v, p, r::AbstractUnitRange, nan_mode; buffer)
result = similar(v, float(eltype(v)))
# wrapping this Int in a `Ref` helps the compiler not create a `Box`
# for capturing `nan_count` in the closures below
nan_count = Ref(0)
add!(x) = isnan(x) ? (nan_count[] += 1) : insert!(buffer, x)
remove!(x) = isnan(x) ? (nan_count[] -= 1) : delete!(buffer, x)
put_quantile!(i) = (result[i] = _quantile( nan_mode, p, buffer, nan_count[] > 0 ))
Δ_remove, Δ_add = first(r)-1, last(r)
put_range = eachindex(v)::AbstractUnitRange
add_range = put_range .- Δ_add
remove_range = put_range .- Δ_remove
@assert Δ_remove <= Δ_add
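    # one pass over an extended index range: for output index i, the window is
    # completed by adding v[i + Δ_add] and removing v[i + Δ_remove], so every
    # element enters and leaves the sorted buffer exactly once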
for i in firstindex(v) - max(Δ_add,0) : lastindex(v) - min(Δ_remove,0)
i ∈ add_range && add!( v[ i + Δ_add ] )
i ∈ remove_range && remove!( v[ i + Δ_remove ] )
i ∈ put_range && put_quantile!(i)
end
result
end
function default_buffer(v,p,w)
# Heuristics derived from superficial benchmarking.
# These can probably be improved.
len = length(make_window(w))
if len > 6000
SkipList{eltype(v)}(; node_capacity = round(Int, len/10))
else
SortedVector{eltype(v)}()
end
end
end # module
| RunningQuantiles | https://github.com/yha/RunningQuantiles.jl.git |
|
[
"MIT"
] | 0.1.1 | 3eb6fd237fd946669acbb95b87eda029cc1d712d | code | 536 | struct SortedVector{T} <: AbstractVector{T}
v::Vector{T}
SortedVector{T}() where {T} = new(T[])
SortedVector(v) = new{eltype(v)}(sort(v))
end
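# The wrapped vector is kept sorted at all times; insert! and delete! maintain
# this invariant via binary search (searchsortedfirst / searchsorted).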
Base.insert!(a::SortedVector, x) = insert!(a.v, searchsortedfirst(a.v, x), x)
function Base.delete!(a::SortedVector, x)
r = searchsorted(a.v, x)
isempty(r) || deleteat!(a.v, first(r))
end
Base.getindex(a::SortedVector, i) = a.v[i]
Base.length(a::SortedVector) = length(a.v)
Base.size(a::SortedVector) = (length(a),)
Base.IndexStyle(::Type{<:SortedVector}) = IndexLinear()
| RunningQuantiles | https://github.com/yha/RunningQuantiles.jl.git |
|
[
"MIT"
] | 0.1.1 | 3eb6fd237fd946669acbb95b87eda029cc1d712d | code | 4263 | using Test
using RunningQuantiles
using ImageFiltering
using NaNMath, NaNStatistics
using Statistics
const ⩦ = isequal
@testset "basic running_median examples" begin
@test running_median(1:5, 0:0) == running_median(1:5, 1) == 1:5
@test running_median(1:5, -1:1) == running_median(1:5, 3) == [1.5, 2.0, 3.0, 4.0, 4.5]
@test running_median(1:5, -2:2) == running_median(1:5, 5) == [2.0, 2.5, 3.0, 3.5, 4.0]
@test running_median(1:5, -3:3) == running_median(1:5, 7) == [2.5, 3.0, 3.0, 3.0, 3.5]
@test running_median(1:5, -4:4) == running_median(1:5, 9) == [3.0, 3.0, 3.0, 3.0, 3.0]
@test running_median(1:5, -500:500) == running_median(1:5, 1001) == [3.0, 3.0, 3.0, 3.0, 3.0]
@test running_quantile(1:5, 1/4, 5) == [1.5, 1.75, 2.0, 2.75, 3.5]
@test running_quantile(1:5, 3/4, 5) == [2.5, 3.25, 4.0, 4.25, 4.5]
@test running_median(1:5, 1:2 ) ⩦ [2.5, 3.5, 4.5, 5.0, NaN]
@test running_median(1:5, -2:-1) ⩦ [NaN, 1.0, 1.5, 2.5, 3.5]
@test running_median(1:5, 1:10) ⩦ [3.5, 4.0, 4.5, 5.0, NaN]
@test running_median(1:5, -10:-1) ⩦ [NaN, 1.0, 1.5, 2.0, 2.5]
@test running_median(1:5, -6:-5) ⩦ fill(NaN,5)
@test running_median(1:5, 5:6 ) ⩦ fill(NaN,5)
@test running_median(1:5, 1:0 ) ⩦ fill(NaN,5)
end
@testset "errors" begin
@testset "even $even $p" for even in 0:2:20, p in 0:0.1:1
@test_throws ArgumentError running_median(1:5, even)
@test_throws ArgumentError running_quantile(1:5, p, even)
end
@testset "negative $negative $p" for negative in [-(10 .^ 2:6); -10:-1], p in 0:0.1:1
@test_throws ArgumentError running_median(1:5, negative)
@test_throws ArgumentError running_quantile(1:5, p, negative)
end
@testset "p = $p ∉ [0,1]" for w in 1:2:9, p in [-100,-1,-1e-6,1+1e-6, 1.1, 10, 100]
@test_throws ArgumentError running_quantile(1:5, p, w)
end
end
@testset "running_median NaN handling" begin
v = [1,2,3,4,NaN,6,7,8,NaN]
@test running_median(v, 0:0) ⩦ running_median(v, 0:0, SkipNaNs()) ⩦ v
@test running_median(v, -1:1) ⩦ running_median(v, -1:1, SkipNaNs()) ⩦ [1.5, 2.0, 3.0, 3.5, 5.0, 6.5, 7.0, 7.5, 8.0]
@test running_median(v, 0:0, PropagateNaNs()) ⩦ v
@test running_median(v, -1:1, PropagateNaNs()) ⩦ [1.5, 2.0, 3.0, NaN, NaN, NaN, 7.0, NaN, NaN]
@test running_median(v, 3:5, PropagateNaNs()) ⩦ [NaN, NaN, 7.0, NaN, NaN, NaN, NaN, NaN, NaN]
@test running_median(v, 9:10, PropagateNaNs()) ⩦ fill(NaN,9)
@test running_median(v, 10:10, ErrOnNaN()) ⩦ fill(NaN,9)
@test_throws Exception running_median(v, 0:0, ErrOnNaN())
end
# Naive implementations of running quantile with NaN handling, with the border behavior of this package
run_q_skip(v,p,w) = mapwindow(v, w; border=Fill(NaN)) do window
non_nans = filter(!isnan, window)
isempty(non_nans) ? NaN : quantile(non_nans, p)
end
function run_q_propagate(v,p,w)
# find a sentinel float value which is not in v, to mark out-of-border elements
sentinel = maximum(filter(isfinite,v)) * 1.01
@assert sentinel ∉ v
p_quantile(window) = any(isnan, window) ? NaN : quantile(filter(!=(sentinel), window), p)
mapwindow(p_quantile, v, w; border=Fill(sentinel))
end
# possible implementation once `ImageFiltering.mapwindow` supports the `NA` border stye:
#run_q_propagate(v,p,w) = mapwindow(w->quantile(w,p), v, w; border=NA())
@testset "ImageFiltering comparisons" begin
v = rand(10_000)
v[rand(length(v)) .< 0.1] .= NaN
v[5000:6000] .= NaN # at least one full window of NaNs in all tested window sizes
@testset "mapwindow w=$w" for w in [1:2:11; 21:10:101; 201:200:1001]
@info "ImageFiltering.mapwindow comparison, w=$w"
@test running_median(v, w) ⩦ run_q_skip(v, 0.5, w)
@test running_median(v, w, PropagateNaNs()) ⩦ run_q_propagate(v, 0.5, w)
@test_throws ErrorException running_median(v, w, ErrOnNaN())
for p in [0.0, 0.1, 0.25, 0.75, 0.9, 1.0]
@test running_quantile(v, p, w) ⩦ run_q_skip(v, p, w)
@test running_quantile(v, p, w, PropagateNaNs()) ⩦ run_q_propagate(v, p, w)
@test_throws ErrorException running_quantile(v, p, w, ErrOnNaN())
end
end
end
# TODO test with OffsetArrays
| RunningQuantiles | https://github.com/yha/RunningQuantiles.jl.git |
|
[
"MIT"
] | 0.1.1 | 3eb6fd237fd946669acbb95b87eda029cc1d712d | docs | 2859 | # RunningQuantiles.jl
*Reasonably fast running quantiles with NaN handling*
## API
```julia
result = running_quantile(v, p, w, nan_mode=SkipNaNs())
```
computes the running `p`-th quantile of `v` with window `w`, where `w` is an odd window length, or a range of offsets.
Specifically,
- if `w` is an `AbstractUnitRange`, `result[i]` is the `p`-th quantile of `v[(i .+ w) ∩ eachindex(v)]`, where `NaN`s are handled according to `nan_mode`:
  - `nan_mode==SkipNaNs()`: `NaN` values are ignored; quantile is computed over non-`NaN`s
  - `nan_mode==PropagateNaNs()`: the result is `NaN` whenever the input window contains `NaN`
  - `nan_mode==ErrOnNaN()`: an error is raised if at least one input window contains `NaN`
- if `w` is an odd integer, a centered window of length `w` is used, namely `-w÷2:w÷2`
```julia
running_median(v, w, nan_mode=SkipNaNs())
```
computes the running median, i.e. 1/2-th quantile, as above.
## Alternatives and benchmarks
These two packages also implement running quantiles/medians, but do not handle `NaN`s (output is garbage when `NaN`s are present):
- [SortFilters.jl](https://github.com/sairus7/SortFilters.jl) is faster for small window sizes.
- [FastRunningMedian.jl](https://github.com/Firionus/FastRunningMedian.jl) is faster for all window sizes, but only supports the median, rather than arbitrary quantiles. It also offers more options for handling of edges.
These packages handle the edges and the correspondence of input to output indices differently; please refer to their respective documentation for details.
The most versatile alternative, in terms of options for edge padding and handling of `NaN` values, is probably [ImageFiltering.mapwindow](https://github.com/JuliaImages/ImageFiltering.jl). But it is not specialized for quantiles, and is therefore a *much* slower option.
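For reference, a `mapwindow`-based equivalent of this package's skip-`NaN`s behavior (with borders padded by `NaN`) can be sketched as follows; it mirrors the naive reference implementation used in this package's test suite:
```julia
using ImageFiltering, Statistics
run_q_skip(v, p, w) = mapwindow(v, w; border=Fill(NaN)) do window
    non_nans = filter(!isnan, window)
    isempty(non_nans) ? NaN : quantile(non_nans, p)
end
```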
Benchmarks for running median on a random vector of length `100_000`:

Shaded areas indicate standard deviation. The input vector has no `NaN`s. Performance of this package in the presence of `NaN`s is generally faster, roughly proportional to the number of non-`NaN`s (the other two packages do not handle `NaN` values correctly).
## Examples
```julia
julia> v = [1:3; fill(NaN,3); 1:5]
11-element Vector{Float64}:
1.0
2.0
3.0
NaN
NaN
NaN
1.0
2.0
3.0
4.0
5.0
julia> running_median(v, 3)
11-element Vector{Float64}:
1.5
2.0
2.5
3.0
NaN
1.0
1.5
2.0
3.0
4.0
4.5
julia> running_median(v, 3, PropagateNaNs())
11-element Vector{Float64}:
1.5
2.0
NaN
NaN
NaN
NaN
NaN
2.0
3.0
4.0
4.5
julia> running_median(v, -3:5) # specifying a non-centered window
11-element Vector{Float64}:
2.0
1.5
2.0
2.0
2.5
3.0
3.0
3.0
3.0
3.0
3.5
```
| RunningQuantiles | https://github.com/yha/RunningQuantiles.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 667 | """
$(DocStringExtensions.README)
"""
module LDPCStorage
using DocStringExtensions
include("utils.jl")
include("alist.jl")
export save_to_alist, print_alist, load_alist
include("cscmat.jl") # this format is deprecated in favour of csc.json
# export save_to_cscmat, load_cscmat, load_matrix_from_qc_cscmat_file, CSCMAT_FORMAT_VERSION
include("cscjson.jl")
export print_bincscjson, save_to_bincscjson
export print_qccscjson, save_to_qccscjson
export load_ldpc_from_json, CSCJSON_FORMAT_VERSION
# This format stores the LDPC code as static data in a c++ header file.
include("cpp_header_based.jl")
export print_cpp_header
export print_cpp_header_QC
end # module
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 7838 | using SparseArrays
using LinearAlgebra
using StatsBase: countmap
struct InconsistentAlistFileError <: Exception
msg::Any
end
function Base.showerror(io::IO, e::InconsistentAlistFileError)
print(io, "InconsistentAlistFileError: ", e.msg)
end
"""
$(SIGNATURES)
Load an LDPC matrix from a text file in alist format. Returns a SparseMatrixCSC{Int8}.
By default, issues a warning if file extension is not ".alist". (Change `warn_unexpected_file_extension` to disable.)
The alist format is redundant (the two halves of a file specify the same information, once by-row and once by-column).
This function only uses the first half. (Change `check_redundant` to also parse and verify the second half.)
For definition of alist format, see http://www.inference.org.uk/mackay/codes/alist.html
"""
function load_alist(file_path::AbstractString; check_redundant=false, warn_unexpected_file_extension=true)
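    # the first alist header line holds "nVN nCN" and the second "dmax_VN dmax_CN";
    # e.g. a 2×3 all-ones matrix starts with the lines "3 2" and "2 3"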
if warn_unexpected_file_extension && file_extension(file_path) != ".alist"
@warn "load_alist called on file with extension '$(file_extension(file_path))', expected '.alist'"
end
local nVN, nCN
local dmax_VN, dmax_CN
local var_node_degs, check_node_degs
local remaining_lines
try
open(file_path, "r") do file
nVN, nCN = space_sep_ints(readline(file)) # sparse matrix has size (nCN, nVN)
dmax_VN, dmax_CN = space_sep_ints(readline(file))
var_node_degs = space_sep_ints(readline(file))
check_node_degs = space_sep_ints(readline(file))
remaining_lines = readlines(file)
end
catch e
throw(InconsistentAlistFileError("Failed to parse '$(abspath(file_path))' as alist file. Reason:\n$e"))
end
# ignore empty lines. Allows, e.g., trailing newlines.
# The alist-files which this library writes do not include a trailing newline.
filter!(remaining_lines) do s
!isnothing(findfirst(r"\S+", s)) # r"\S+" means "at least one non-whitespace character"
end
if length(remaining_lines) != nVN + nCN
throw(InconsistentAlistFileError("Number of lines in $file_path is inconcistent with stated matrix size."))
end
if dmax_CN != maximum(check_node_degs)
throw(InconsistentAlistFileError("Alist file $file_path claims: max. CN degree=$dmax_CN but contents give $(maximum(check_node_degs))."))
end
if dmax_VN != maximum(var_node_degs)
throw(InconsistentAlistFileError("Alist file $file_path claims: max. VN degree=$dmax_VN but contents give $(maximum(var_node_degs))."))
end
# fill the matrix using coordinate format (COO)
I = Int[]; sizehint!(I, nCN ÷ 100) # assume sparsity of 1% to minimize re-allocations
J = Int[]; sizehint!(J, nVN ÷ 100) # assume sparsity of 1% to minimize re-allocations
for col_ind in 1:nVN
rows = space_sep_ints(remaining_lines[col_ind])
if check_redundant && length(rows) != var_node_degs[col_ind]
throw(InconsistentAlistFileError("Variable node degree in $file_path inconsistent with below data for VN $col_ind."))
end
for row_ind in rows
# achieves `H[row_ind, col_ind] = 1`
push!(I, row_ind)
push!(J, col_ind)
end
end
H = sparse(I, J, one(Int8)) # has size (nCN, nVN)
# the second half of the alist file is redundant. Check that it is consistent.
if check_redundant
entry_counter = 0
for row_ind in 1:nCN
cols = space_sep_ints(remaining_lines[nVN + row_ind])
check_node_degree = length(cols)
if check_node_degree != check_node_degs[row_ind]
throw(InconsistentAlistFileError("Check node degree in $file_path inconsistent with below data for CN $row_ind."))
end
entry_counter += check_node_degree
for col_ind in cols
if H[row_ind, col_ind] != 1
throw(InconsistentAlistFileError("VN and CN specifications in $file_path disagree on matrix entry ($row_ind, $col_ind)."))
end
end
end
if entry_counter != sum(H)
throw(InconsistentAlistFileError("VN and CN specifications in $file_path are inconsistent."))
end
end
return H
end
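# Usage sketch (the path is hypothetical; `load_alist` returns a SparseMatrixCSC{Int8}):
#     H = load_alist("codes/my_code.alist"; check_redundant=true)
#     size(H)  # (number of check nodes, number of variable nodes)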
"""
$(SIGNATURES)
Save LDPC matrix to file in alist format.
It is assumed that the matrix only contains zeros and ones. Otherwise, behavior is undefined.
For details about the Alist format, see:
https://aff3ct.readthedocs.io/en/latest/user/simulation/parameters/codec/ldpc/decoder.html#dec-h-path-image-required-argument
http://www.inference.org.uk/mackay/codes/alist.html
"""
function save_to_alist(out_file_path::String, matrix::AbstractMatrix{Int8})
open(out_file_path, "w+") do file
print_alist(file, matrix)
end
return nothing
end
"""
$(SIGNATURES)
Save LDPC matrix to file in alist format.
It is assumed that the matrix only contains zeros and ones. Otherwise, behavior is undefined.
For details about the Alist format, see:
https://aff3ct.readthedocs.io/en/latest/user/simulation/parameters/codec/ldpc/decoder.html#dec-h-path-image-required-argument
http://www.inference.org.uk/mackay/codes/alist.html
"""
function print_alist(io::IO, matrix::AbstractMatrix{Int8})
(the_M, the_N) = size(matrix)
check_node_degrees, variable_node_degrees = get_node_degrees(matrix)
# write data as specified by the alist format
lines = String[]
# -- Part 1 --
# 'the_N' is the total number of variable nodes and 'the_M' is the total number of check nodes
push!(lines, "$the_N $the_M")
# 'dmax_VN' is the highest variable node degree and 'dmax_CN' is the highest check node degree
push!(lines, "$(maximum(variable_node_degrees)) $(maximum(check_node_degrees))")
# list of the degrees for each variable node
push!(lines, join(["$deg" for deg in variable_node_degrees], " "))
# list of the degrees for each check node
push!(lines, join(["$deg" for deg in check_node_degrees], " "))
# -- Part 2 --
# each following line describes the check nodes connected to a variable node, the first
# check node index is '1' (i.e., alist format uses 1-based indexing)
# variable node '1'
"""
Get indices of elements equal to one in a matrix.
Returns `Vector{String}`, one string with indices for each row of the matrix.
"""
function get_node_indices(matrix::AbstractArray{Int8,2})
index_lists = [findall(row .== 1) for row in eachrow(matrix)]
return [join(string.(index_list), " ") for index_list in index_lists]
end
append!(lines, get_node_indices(transpose(matrix)))
# -- Part 3 --
# each following line describes the variables nodes connected to a check node, the first
# variable node index is '1' (i.e., alist format uses 1-based indexing)
# check node '1'
append!(lines, get_node_indices(matrix))
for line in lines
println(io, line)
end
return nothing
end
function get_node_degrees(matrix::AbstractMatrix{Int8})
check_node_degrees = [sum(row) for row in eachrow(matrix)]
variable_node_degrees = [sum(row) for row in eachcol(matrix)]
return check_node_degrees, variable_node_degrees
end
"""Faster version operating on sparse arrays. Assumes all non-zero values are 1!!"""
function get_node_degrees(H_::AbstractSparseMatrix{Int8})
H = dropzeros(H_)
I, J, _ = findnz(H) # assumes all non-zero values are 1!
row_counts = countmap(I)
col_counts = countmap(J)
check_node_degrees = zeros(Int, size(H, 1))
var_node_degrees = zeros(Int, size(H, 2))
check_node_degrees[collect(keys(row_counts))] .= values(row_counts)
var_node_degrees[collect(keys(col_counts))] .= values(col_counts)
return check_node_degrees, var_node_degrees
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 5887 | # This script generates a `.hpp` file (C++ header) containing
# an LDPC code stored in compressed sparse column (CSC) format.
# See command line help for how to use it.
using SparseArrays
using LinearAlgebra
using Pkg
# TODO!!!! USE `pkgversion(m::Module)` IN JULIA 1.9
const cpp_file_description = """
// This file was automatically generated using LDPCStorage.jl (https://github.com/XQP-Munich/LDPCStorage.jl).
// A sparse LDPC matrix (containing only zeros and ones) is saved in compressed sparse column (CSC) format.
// Since the matrix (and LDPC code) is known at compile time, there is no need to save it separately in a file.
// Note that this increases the executable size, but the same memory would be needed anyway if the matrix were loaded from a separate file at runtime.
"""
function smallest_cpp_type(x::Real)
if x isa Integer
bits_needed = ceil(Int, log2(x + 1))  # x + 1 so that exact powers of two (e.g. 256) are assigned enough bits
if bits_needed <= 8
return "std::uint8_t"
elseif bits_needed <= 16
return "std::uint16_t"
elseif bits_needed <= 32
return "std::uint32_t"
elseif bits_needed <= 64
return "std::uint64_t"
else
throw(ArgumentError("Value $x does not fit a standard-supported C++ fixed width integer type."))
end
else
return "double"
end
end
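# Examples of the mapping above (values, not bit widths, are passed in; these assume
# the `log2(x + 1)` bit count used above):
#     smallest_cpp_type(255) == "std::uint8_t"
#     smallest_cpp_type(256) == "std::uint16_t"   # 256 needs 9 bits
#     smallest_cpp_type(1.5) == "double"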
"""
$(SIGNATURES)
Output C++ header storing the sparse binary (containing only zeros and ones) matrix H
in compressed sparse column (CSC) format.
Note the conversion from Julia's one-based indices to zero-based indices in C++ (also within CSC format)!
"""
function print_cpp_header(
io::IO,
H::SparseMatrixCSC{Int8}
;
namespace_name::AbstractString = "AutogenLDPC",
)
H = dropzeros(H) # remove stored zeros!
_, _, values = findnz(H)
all(values .== 1) || throw(ArgumentError("Expected matrix containing only zeros and ones."))
num_nonzero = length(values)
colptr_cpp_type = smallest_cpp_type(num_nonzero)
row_idx_cpp_type = smallest_cpp_type(size(H, 1))
print(io, cpp_file_description)
println(io, """
#include <cstdint>
#include <array>
namespace $namespace_name {
constexpr inline std::size_t M = $(size(H, 1)); // number of matrix rows
constexpr inline std::size_t N = $(size(H, 2)); // number of matrix columns
constexpr inline std::size_t num_nz = $num_nonzero; // number of stored entries
constexpr inline std::array<$colptr_cpp_type, N + 1> colptr = {""")
for (i, idx) in enumerate(H.colptr)
print(io, "0x$(string(idx - 1, base=16))") # Convert index to base zero
if i != length(H.colptr)
print(io, ",")
end
if mod(i, 100) == 0
println(io, "") # for formatting.
end
end
println(io, "\n};\n")
println(io, "// ------------------------------------------------------- \n")
println(io, "constexpr inline std::array<$row_idx_cpp_type, num_nz> row_idx = {")
for (i, idx) in enumerate(H.rowval)
print(io, "0x$(string(idx - 1, base=16))") # Convert index to base zero
if i != length(H.rowval)
print(io, ",")
end
if mod(i, 100) == 0
println(io, "") # for formatting.
end
end
println(io, "\n};\n\n")
println(io, "} // namespace $namespace_name")
end
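# Usage sketch: write a header for some `H::SparseMatrixCSC{Int8}` containing only
# zeros and ones (the output path and namespace are hypothetical):
#     open("autogen_ldpc.hpp", "w") do io
#         print_cpp_header(io, H; namespace_name="AutogenLDPC")
#     end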
"""
$(SIGNATURES)
Output C++ header storing the quasi-cyclic exponents of an LDPC matrix in compressed sparse column (CSC) format.
This implies three arrays, which are called `colptr`, `row_idx` and `values`.
The expansion factor must also be given (it is simply stored as a variable in the header).
Note the conversion from Julia's one-based indices to zero-based indices in C++ (also within CSC format)!
"""
function print_cpp_header_QC(
io::IO,
H::SparseMatrixCSC
;
expansion_factor::Integer,
namespace_name::AbstractString = "AutogenLDPC_QC",
)
H = dropzeros(H) # remove stored zeros!
_, _, values = findnz(H)
num_nonzero = length(values)
colptr_cpp_type = smallest_cpp_type(num_nonzero)
row_idx_cpp_type = smallest_cpp_type(size(H, 1))
values_cpp_type = smallest_cpp_type(maximum(values))
print(io, cpp_file_description)
println(io, """
#include <cstdint>
#include <array>
namespace $namespace_name {
constexpr inline std::size_t M = $(size(H, 1));
constexpr inline std::size_t N = $(size(H, 2));
constexpr inline std::size_t num_nz = $num_nonzero;
constexpr inline std::size_t expansion_factor = $expansion_factor;
constexpr inline std::array<$colptr_cpp_type, N + 1> colptr = {""")
for (i, idx) in enumerate(H.colptr)
print(io, "0x$(string(idx - 1, base=16))") # Convert index to base zero
if i != length(H.colptr)
print(io, ",")
end
if mod(i, 100) == 0
println(io, "") # for formatting.
end
end
println(io, "\n};\n")
println(io, "// ------------------------------------------------------- \n")
println(io, "constexpr inline std::array<$row_idx_cpp_type, num_nz> row_idx = {")
for (i, idx) in enumerate(H.rowval)
print(io, "0x$(string(idx - 1, base=16))") # Convert index to base zero
if i != length(H.rowval)
print(io, ",")
end
if mod(i, 100) == 0
println(io, "") # for formatting.
end
end
println(io, "\n};\n\n")
println(io, "// ------------------------------------------------------- \n")
println(io, "constexpr inline std::array<$values_cpp_type, num_nz> values = {")
for (i, v) in enumerate(H.nzval)
print(io, string(v))
if i != length(H.rowval)
print(io, ",")
end
if mod(i, 100) == 0
println(io, "") # for formatting.
end
end
println(io, "\n};\n\n")
println(io, "} // namespace $namespace_name")
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 6938 | using SparseArrays
using LinearAlgebra
using JSON
const CSCJSON_FORMAT_VERSION = v"0.3.3" # track version of our custom compressed sparse storage json file format.
const format_if_nnz_values_omitted = :BINCSCJSON
const format_if_nnz_values_stored = :COMPRESSED_SPARSE_COLUMN
const description = "Compressed sparse column storage of a matrix. The format defines a sparse matrix using arrays "*
"'column pointers' (json key `colptr`), 'row indices' (key `rowval`) and 'stored entries of the matrix' (key `nzval`). "*
"If the `format` is $format_if_nnz_values_omitted, the `nzval` array is omitted and all non-zero entries of the matrix are assumed to be '1'. "*
"If `format` is $format_if_nnz_values_stored, `nzval` is included."
get_metadata() = Dict(
[
:julia_package_version => string(pkgversion(LDPCStorage))
:julia_package_url => "https://github.com/XQP-Munich/LDPCStorage.jl"
]
)
struct InconsistentBINCSCError <: Exception
msg::Any
end
function Base.showerror(io::IO, e::InconsistentBINCSCError)
print(io, "InconsistentBINCSCError: ", e.msg)
end
"""
$(SIGNATURES)
Helper method to use with files. See `print_bincscjson` for main interface.
Writes the two arrays `colptr` and `rowval` defining compressed sparse column (CSC) storage of a matrix into a json file.
Errors unless sparse matrix only contains ones and zeros.
The third array of CSC format, i.e., the nonzero entries, is not needed, since the matrix is assumed to only contain ones and zeros.
"""
function save_to_bincscjson(
destination_file_path::String, mat::SparseMatrixCSC,
;
varargs...,
)
expected_extension = ".bincsc.json"
if !endswith(destination_file_path, expected_extension)
@warn "Expected extension '$expected_extension' when writing to '$(destination_file_path)')"
end
open(destination_file_path, "w+") do file
print_bincscjson(file, mat; varargs...)
end
return nothing
end
"""
$(SIGNATURES)
Writes the two arrays `colptr` and `rowval` defining compressed sparse column (CSC) storage of a matrix into a json file.
Errors unless sparse matrix only contains ones and zeros.
The third array of CSC format, i.e., the nonzero entries, is not needed, since the matrix is assumed to only contain ones and zeros.
"""
function print_bincscjson(
io::IO, mat::SparseMatrixCSC
;
comments::AbstractString="",
)
all(x->x==1, mat.nzval) || throw(ArgumentError(
"The input matrix has nonzero entries besides 1. Note: the matrix should have no stored zeros."))
data = Dict(
:CSCJSON_FORMAT_VERSION => string(CSCJSON_FORMAT_VERSION),
:description => description*"\n\nThis file stores a sparse binary matrix in compressed sparse column (CSC) format.",
:comments => comments,
:format => format_if_nnz_values_omitted, # this function does not store nonzero values.
:n_rows => mat.m,
:n_columns => mat.n,
:n_stored_entries => nnz(mat),
:colptr => mat.colptr .- 1,
:rowval => mat.rowval .- 1,
)
try
data[:metadata] = get_metadata()
catch e
@warn "Generating metadata failed. Including default. Error:\n $e"
data[:metadata] = "Metadata generation failed."
end
JSON.print(io, data)
return nothing
end
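# Usage sketch (hypothetical path; `H` is a sparse matrix containing only zeros and ones):
#     save_to_bincscjson("code.bincsc.json", H; comments="generated by unit test")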
"""
$(SIGNATURES)
Helper method to use with files. See `print_qccscjson` for main interface.
Write the three arrays defining compressed sparse column (CSC) storage of a matrix into a file.
This is used to store the exponents of a quasi-cyclic LDPC matrix.
The QC expansion factor must be specified.
"""
function save_to_qccscjson(
destination_file_path::String, mat::SparseMatrixCSC
;
varargs...
)
expected_extension = ".qccsc.json"
if !endswith(destination_file_path, expected_extension)
@warn "Expected extension '$expected_extension' when writing to '$(destination_file_path)')"
end
open(destination_file_path, "w+") do file
print_qccscjson(file, mat; varargs...)
end
return nothing
end
"""
$(SIGNATURES)
Write the three arrays defining compressed sparse column (CSC) storage of a matrix into a file.
This is used to store the exponents of a quasi-cyclic LDPC matrix.
The matrix is assumed to contain quasi-cyclic exponents of an LDPC matrix.
The QC expansion factor must be specified.
"""
function print_qccscjson(
io::IO, mat::SparseMatrixCSC,
;
qc_expansion_factor::Integer,
comments::AbstractString="",
)
data = Dict(
:CSCJSON_FORMAT_VERSION => string(CSCJSON_FORMAT_VERSION),
:description => description*"\n\nThis file stores the quasi-cyclic exponents of a low density parity check (LDPC) code in compressed sparse column (CSC) format.",
:comments => comments,
:format => format_if_nnz_values_stored, # this function does store nonzero values.
:n_rows => mat.m,
:n_columns => mat.n,
:n_stored_entries => nnz(mat),
:qc_expansion_factor => qc_expansion_factor,
:colptr => mat.colptr .- 1,
:rowval => mat.rowval .- 1,
:nzval => mat.nzval,
)
try
data[:metadata] = get_metadata()
catch e
@warn "Generating metadata failed. Including default. Error:\n $e"
data[:metadata] = "Metadata generation failed."
end
JSON.print(io, data)
return nothing
end
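# Usage sketch (hypothetical path and expansion factor; `Hqc` holds quasi-cyclic exponents):
#     save_to_qccscjson("code.qccsc.json", Hqc; qc_expansion_factor=32, comments="example")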
"""
$(SIGNATURES)
Loads LDPC matrix from a json file containing compressed sparse column (CSC) storage for either of
- `qccscjson` (CSC of quasi-cyclic exponents) format
- `bincscjson` (CSC of sparse binary matrix) format
Use option to expand quasi-cyclic exponents and get a sparse binary matrix.
"""
function load_ldpc_from_json(file_path::AbstractString; expand_qc_exponents_to_binary=false)
data = JSON.parsefile(file_path)
if VersionNumber(data["CSCJSON_FORMAT_VERSION"]) != CSCJSON_FORMAT_VERSION
@warn "File $file_path uses format version $(data["CSCJSON_FORMAT_VERSION"]) while library uses format version $CSCJSON_FORMAT_VERSION. Possibly incompatible."
end
if data["format"] == string(format_if_nnz_values_omitted)
return SparseMatrixCSC(data["n_rows"], data["n_columns"], data["colptr"] .+1, data["rowval"] .+1, ones(Int8, data["n_stored_entries"]))
elseif data["format"] == string(format_if_nnz_values_stored)
Hqc = SparseMatrixCSC(data["n_rows"], data["n_columns"], data["colptr"] .+1, data["rowval"] .+1, Array{Int}(data["nzval"]))
if expand_qc_exponents_to_binary
return Hqc_to_pcm(Hqc, data["qc_expansion_factor"])
else
return Hqc
end
else
throw(InconsistentBINCSCError("File $file_path specifies invalid format `$(data["format"])`."))
end
end
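# Usage sketch (hypothetical paths):
#     H   = load_ldpc_from_json("code.bincsc.json")  # sparse binary matrix
#     Hqc = load_ldpc_from_json("code.qccsc.json")   # matrix of QC exponents
#     H2  = load_ldpc_from_json("code.qccsc.json"; expand_qc_exponents_to_binary=true)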
function get_qc_expansion_factor(file_path::AbstractString)
data = JSON.parsefile(file_path)
return Int(data["qc_expansion_factor"])
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 8007 | using SparseArrays
using LinearAlgebra
CSCMAT_FORMAT_VERSION = v"0.1.0" # track version of our custom CSCMAT file format.
"""
Note: THIS FORMAT IS DEPRECATED! USE THE JSON-BASED FORMATS!
$(SIGNATURES)
Write the three arrays defining compressed sparse column (CSC) storage of a matrix into a file.
If `try_hex`, integers in arrays are stored as hexadecimals (without 0x prefix!)
If `allow_omit_entries_if_only_stored_ones`, the `stored values` array is omitted if all stored values compare equal to 1.
additional_header_lines should contain the quasi-cyclic expansion factor, if any. E.g.:
``additional_header_lines = "QC matrix with expansion factor 32"``
"""
function save_to_cscmat(
mat::SparseMatrixCSC, destination_file_path::String
;
additional_header_lines="",
try_hex::Bool=false,
allow_omit_entries_if_only_stored_ones=false,
)
try
additional_header_lines = "#"*join(split(additional_header_lines, "\n"), "\n# ")
catch
@warn "Failed to process additional header lines. Discarding them."
additional_header_lines = ""
end
if file_extension(destination_file_path) != ".cscmat"
@warn "Writing to sparse column storage file with extension
'$(file_extension(destination_file_path))', expected '.cscmat'. (path '$(destination_file_path)')"
end
number_map(x::Real) = x
number_map(n::Integer) = string(n, base=try_hex ? 16 : 10)
open(destination_file_path, "w+") do file
println(file, "# $CSCMAT_FORMAT_VERSION")
println(file, "# Compressed sparse column storage of matrix (arrays `colptr`, `rowval`, `stored_values`"
*" as space separated $(try_hex ? "hexadecimal" : "decimal") integers. Stored entries may be zero.).")
println(file, additional_header_lines)
println(file, "# n_rows n_columns n_stored_entries")
println(file, "$(mat.m) $(mat.n) $(nnz(mat))\n")
println(file, join(number_map.(mat.colptr .- 1), " ")) # convert to zero-based indices
println(file, "")
println(file, join(number_map.(mat.rowval .- 1), " ")) # convert to zero-based indices
println(file, "")
if !allow_omit_entries_if_only_stored_ones || any(x->x!=1, mat.nzval)
println(file, join(number_map.(mat.nzval), " "))
end
end
return nothing
end
function read_file_header(file_path; comment_marker = '#')
header = ""
open(file_path, "r") do file
next_line = ""
while true
header *= (next_line*"\n")
next_line = readline(file)
(length(next_line) > 0 && next_line[1] == comment_marker) || break
end
end
return header
end
"""
Note: THIS FORMAT IS DEPRECATED! USE THE JSON-BASED FORMATS!
$(SIGNATURES)
Read the three arrays defining compressed sparse column (CSC) storage of a matrix from a file.
Hexadecimal number storage (as written by `save_to_cscmat` with `try_hex=true`, without 0x prefix)
is detected automatically from the file header.
"""
function load_cscmat(file_path::String;
print_file_header=false)
expected_file_extension = ".cscmat"
if file_extension(file_path) != expected_file_extension
@warn "load_cscmat called on file '$(file_path)'
with extension '$(file_extension(file_path))', expected $expected_file_extension"
end
file = open(file_path, "r")
header = ""
next_line = ""
while true
header *= (next_line*"\n")
next_line = readline(file)
(length(next_line) > 0 && next_line[1] == '#') || break
end
try
version_line = strip(split(header, '\n'; keepempty=false)[1])
version_line != "# $CSCMAT_FORMAT_VERSION" && @warn "File written in format $(version_line[3:end]) is being read in format $CSCMAT_FORMAT_VERSION"
catch e
@warn "Failed to verify CSC file format version: $e"
end
if contains(header, "hexadecimal")
base = 16
else
base = 10
end
print_file_header && print(header)
n_rows, n_cols, n_nnz = space_sep_ints(next_line)
_ = readline(file) # empty line
colptr = space_sep_ints(readline(file); base)
_ = readline(file) # empty line
rowval = space_sep_ints(readline(file); base)
_ = readline(file) # empty line
stored_entries = space_sep_ints(readline(file); base)
_ = readline(file) # empty line
remaining_lines = readlines(file)
close(file)
if length(remaining_lines) > 0
@warn "Ignoring additional lines:\n`$remaining_lines`"
end
if length(stored_entries) == 0
stored_entries = ones(Int8, n_nnz)
end
# convert from zero-based to one-based indices.
return SparseMatrixCSC(n_rows, n_cols, colptr .+1, rowval .+1, stored_entries)
end
"""
$(SIGNATURES)
Convert matrix of exponents for QC LDPC matrix to the actual binary LDPC matrix.
The resulting QC-LDPC matrix is a block matrix where each block is either zero,
or a circ-shifted identity matrix of size `expansion_factor`x`expansion_factor`.
Each entry of the matrix Hqc denotes the amount of circular shift in the QC-LDPC matrix.
No entry (or, by default, a stored zero entry) at a given position in Hqc means the associated block is zero.
Use values of `expansion_factor` to denote the non-shifted identity matrix.
"""
function Hqc_to_pcm(
Hqc::SparseMatrixCSC{T,Int} where T <: Integer,
expansion_factor::Integer
;
drop_stored_zeros=true
)
if drop_stored_zeros
Hqc = dropzeros(Hqc)
end
scale_idx(idx::Integer, expansion_factor::Integer) = (idx - 1) * expansion_factor + 1
shifted_identity(N::Integer, shift::Integer, scalar_one=Int8(1)) = circshift(Matrix(scalar_one*I, N, N), (0, shift))
H = spzeros(Int8, size(Hqc, 1) * expansion_factor, size(Hqc, 2) * expansion_factor)
Is, Js, Vs = findnz(Hqc)
for (i, j, v) in zip(Is, Js, Vs)
i_start = scale_idx(i, expansion_factor)
i_end = scale_idx(i+1,expansion_factor) - 1
j_start = scale_idx(j, expansion_factor)
j_end = scale_idx(j+1, expansion_factor) - 1
H[i_start:i_end, j_start:j_end] = shifted_identity(expansion_factor, v, Int8(1))
end
return H
end
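# For example, with expansion factor 2 each stored exponent becomes a circularly
# shifted 2x2 identity block (a shift equal to the expansion factor is the identity):
#     Hqc_to_pcm(sparse([1 2]), 2) == [0 1 1 0
#                                      1 0 0 1]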
"""
Note: THIS FORMAT IS DEPRECATED! USE THE JSON-BASED FORMATS!
$(SIGNATURES)
Load exponents for a QC-LDPC matrix from a `.CSCMAT` file and return the binary LDPC matrix.
Not every input `.cscmat` file will give a meaningful result.
The `.cscmat` format allows to store general sparse matrices in text format.
Meanwhile, this function expects that the file stores exponents for a quasi-cyclic LDPC matrix.
The exponent matrix is read and expanded using the expansion factor.
If the expansion factor is not provided, the CSCMAT file must contain a line specifying it.
For example, 'QC matrix with expansion factor 32'
"""
function load_matrix_from_qc_cscmat_file(file_path::AbstractString; expansion_factor=nothing)
if isnothing(expansion_factor)
header = ""
next_line = ""
try
open(file_path, "r") do f
while true
header *= (next_line * "\n")
next_line = readline(f)
(length(next_line) > 0 && next_line[1] == '#') || break
end
end
catch e
throw(InconsistentBINCSCError("Failed to parse header of file '$file_path'. Not a valid qc-cscmat file? Reason:\n$e"))
end
m = match(r"Quasi cyclic exponents for a binary LDPC matrix with expansion factor ([0-9]*)\.", header)
if isnothing(m)
throw(InconsistentBINCSCError("Failed to infer expansion factor! No header line found containing it."))
else
expansion_factor = parse(Int, m.captures[1])
@info "Inferred expansion factor from file header: $expansion_factor"
end
end
local Hqc
try
Hqc = load_cscmat(file_path)
catch e
throw(InconsistentBINCSCError("Failed to parse contents of file '$file_path'. Not a valid qc-cscmat file? Reason:\n$e"))
end
H = Hqc_to_pcm(Hqc, expansion_factor)
return H
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 701 | using SparseArrays, SHA
"""
$(SIGNATURES)
parse a single line of space separated integers
"""
space_sep_ints(s::AbstractString; base=10) = parse.(Int, split(s); base)
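# e.g. space_sep_ints("3 14 15") == [3, 14, 15] and space_sep_ints("ff 10"; base=16) == [255, 16]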
function file_extension(path::String)
if contains(path, ".")
return path[findlast(isequal('.'),path):end]
else
return ""
end
end
"""
$(SIGNATURES)
Returns a 256 bit hash of a sparse matrix.
This function should only be used for unit tests!!!
"""
function hash_sparse_matrix(H::SparseMatrixCSC)
ctx = SHA2_256_CTX()
io = IOBuffer(UInt8[], read=true, write=true)
write(io, H.colptr)
write(io, H.rowval)
write(io, H.nzval)
update!(ctx, take!(io))
return digest!(ctx)
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 1695 | using Test, Aqua
using LDPCStorage
@testset "Aqua (Code quality)" begin
Aqua.test_ambiguities([LDPCStorage, Core]) # exclude `Base` in order to not hit unrelated ambiguities from StatsBase.
Aqua.test_unbound_args(LDPCStorage)
Aqua.test_undefined_exports(LDPCStorage)
Aqua.test_project_extras(LDPCStorage)
Aqua.test_stale_deps(LDPCStorage)
# Don't care about compat entries for test-only dependencies.
# Also ignore LinearAlgebra because in current Julia it doesn't "have a version"?!
Aqua.test_deps_compat(LDPCStorage; check_extras = false, ignore=[:LinearAlgebra])
Aqua.test_piracies(LDPCStorage)
Aqua.test_persistent_tasks(LDPCStorage)
end
@testset "LDPCStorage" begin
all_tests = Dict{String,Array{String,1}}(
"File Formats" => [
"test_alist.jl",
"test_cscmat.jl",
"test_cscjson.jl",
"test_cpp_header.jl",
"test_readme_doctest.jl",
],
)
for (testsetname, test_files) in all_tests
@testset "$testsetname" begin
@info "Running tests for `$testsetname`\n$(repeat("=", 60))\n"
for source in test_files
@testset "$source" begin
@info "Running tests in `$source`..."
include(source)
end
end
end
end
# check if any files were missed in the explicit list above
potential_testfiles =
[file for file in readdir(@__DIR__) if match(r"^test_.*\.jl$", file) !== nothing]
tested_files = all_tests |> values |> Iterators.flatten
@test isempty(setdiff(potential_testfiles, tested_files))
end # @testset "LDPCStorage"
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 690 | using Test, SparseArrays
using LDPCStorage
@testset "save and load alist" begin
# Save and load this arbitrary matrix:
H = Int8[
0 0 1 1 0 0 0 0 1 0 0 1 1 0
1 0 0 1 1 0 0 0 0 0 1 0 0 1
0 1 0 1 0 1 1 0 1 0 0 1 1 0
1 0 0 1 0 0 0 1 0 1 0 1 0 1
]
file_path = tempname() * "_unit_test.alist"
save_to_alist(file_path, H)
H_loaded = load_alist(file_path)
H_checked_redundancy = load_alist(file_path; check_redundant=true)
@test H_checked_redundancy == H_loaded
@test H == H_loaded
# check failures
@test_throws Exception load_alist("$(pkgdir(LDPCStorage))/test/files/test_Hqc.cscmat") # completely invalid file
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 869 |
function main(args)
end
@testset "write c++ header" begin
output_path = tempname()
H = sparse(Int8[
0 0 1 1 0 0 0 0 1 0 0 1 1 0
1 9 0 1 1 0 0 0 0 0 1 0 0 1 # 9 will be stored zero
0 1 0 1 0 1 1 0 1 0 0 1 1 0
1 0 0 1 0 0 0 1 0 1 0 1 0 1
])
H[2,2] = 0 # 9 becomes a stored zero
open(output_path, "w+") do io
print_cpp_header(io, H)
end
# TODO check correctness of written C++ header!
end
@testset "write c++ header for quasi-cyclic exponents of matrix" begin
output_path = tempname()
Hqc = load_ldpc_from_json(qccscjson_example_file_path)
expansion_factor = LDPCStorage.get_qc_expansion_factor(qccscjson_example_file_path)
open(output_path, "w+") do io
print_cpp_header_QC(io, Hqc; expansion_factor)
end
# TODO check correctness of written C++ header!
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 2926 | using Test, SparseArrays
using LDPCStorage
using LDPCStorage: hash_sparse_matrix
qccscjson_example_file_path = "$(pkgdir(LDPCStorage))/test/files/test_Hqc.qccsc.json"
bincscjson_example_file_path = "$(pkgdir(LDPCStorage))/test/files/test_H.bincsc.json"
@testset "load_ldpc_from_json" begin
Hqc = load_ldpc_from_json(qccscjson_example_file_path)
@test hash_sparse_matrix(Hqc) == UInt8[0x8d, 0xd2, 0x45, 0x0b, 0x9a, 0x5b, 0x8b, 0x4a, 0x6d, 0xab, 0x14, 0x7d,
0x79, 0x72, 0xdd, 0x15, 0x1a, 0x41, 0x4c, 0xa1, 0xc8, 0xd0, 0x23, 0x84, 0x49, 0x17, 0x6a, 0xc8, 0x2b, 0x05, 0x8f, 0xba]
H = load_ldpc_from_json(bincscjson_example_file_path)
@test hash_sparse_matrix(H) == UInt8[0x46, 0x63, 0xd9, 0x12, 0x4b, 0xd6, 0xb1, 0xdf, 0xaf, 0xe7, 0x4f, 0x5d,
0x7f, 0x7d, 0x47, 0x5a, 0x4c, 0xd9, 0x6a, 0xf8, 0xae, 0xbb, 0xbd, 0x22, 0xe6, 0xa9, 0x5d, 0x9d, 0xd4, 0x52, 0x33, 0xcb]
end
@testset "save_to_bincscjson and load_ldpc_from_json" begin
H = load_ldpc_from_json(bincscjson_example_file_path)
target_file = tempname() * ".bincsc.json"
# TODO think about whether to allow (and just drop) stored zeros
save_to_bincscjson(target_file, H; comments="Some comment")
H_read = load_ldpc_from_json(target_file)
@test H_read == H
end
@testset "save_to_qccscjson and load_ldpc_from_json" begin
Hqc = load_ldpc_from_json(qccscjson_example_file_path)
target_file = tempname() * ".qccsc.json"
save_to_qccscjson(target_file, Hqc; comments="Some comment", qc_expansion_factor=32)
H_read = load_ldpc_from_json(target_file)
@test H_read == Hqc
@test LDPCStorage.Hqc_to_pcm(Hqc, 32) == load_ldpc_from_json(
target_file; expand_qc_exponents_to_binary=true)
end
@testset "Hqc_to_pcm" begin
Hqc = sparse([
4 0 1
1 2 -99 # the -99 will be a zero.
])
Hqc[2,3] = 0 # replace -99 by stored zero in sparse matrix
expansion_factor = 4
# each entry of Hqc describes an `expansion_factor x expansion_factor`` sized subblock of H
H_expected = sparse([
1 0 0 0 0 0 0 0 0 1 0 0
0 1 0 0 0 0 0 0 0 0 1 0
0 0 1 0 0 0 0 0 0 0 0 1
0 0 0 1 0 0 0 0 1 0 0 0
0 1 0 0 0 0 1 0 0 0 0 0
0 0 1 0 0 0 0 1 0 0 0 0
0 0 0 1 1 0 0 0 0 0 0 0
1 0 0 0 0 1 0 0 0 0 0 0
])
@test LDPCStorage.Hqc_to_pcm(Hqc, expansion_factor) == H_expected
end
@testset "stored zeros" begin
H = sparse(Int8[
0 0 1 1 0 0 0 0 1 0 0 1 1 0
1 9 0 1 1 0 0 0 0 0 1 0 0 1 # 9 will be stored zero
0 1 0 1 0 1 1 0 1 0 0 1 1 0
1 0 0 1 0 0 0 1 0 1 0 1 0 1
])
H[2,2] = 0 # 9 becomes a stored zero
target_file = tempname() * ".bincsc.json"
@test_throws ArgumentError save_to_bincscjson(target_file, H; comments="Some comment")
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 2945 | using Test, SparseArrays
using LDPCStorage
using LDPCStorage: hash_sparse_matrix, save_to_cscmat, load_cscmat, load_matrix_from_qc_cscmat_file, CSCMAT_FORMAT_VERSION
qc_file_path = "$(pkgdir(LDPCStorage))/test/files/test_Hqc.cscmat"
h_file_path = "$(pkgdir(LDPCStorage))/test/files/test_H.cscmat"
@testset "hash_sparse_matrix" begin
hash = hash_sparse_matrix(sparse([1 2 3 0; 5 0 7 8]))
@test hash == UInt8[0x98, 0x82, 0x9c, 0x17, 0xc6, 0xd2, 0xb4, 0xd4, 0x55, 0x4c, 0x4e, 0x80, 0xd6, 0xea, 0x26,
0xf8, 0x44, 0xb5, 0x72, 0x65, 0xae, 0x93, 0xb8, 0xea, 0x2a, 0x21, 0x92, 0x00, 0x2a, 0x82, 0xcd, 0x93]
end
@testset "load_cscmat" begin
Hqc = load_cscmat(qc_file_path; print_file_header=false)
@test hash_sparse_matrix(Hqc) == UInt8[0x8d, 0xd2, 0x45, 0x0b, 0x9a, 0x5b, 0x8b, 0x4a, 0x6d, 0xab, 0x14, 0x7d,
0x79, 0x72, 0xdd, 0x15, 0x1a, 0x41, 0x4c, 0xa1, 0xc8, 0xd0, 0x23, 0x84, 0x49, 0x17, 0x6a, 0xc8, 0x2b, 0x05, 0x8f, 0xba]
H = load_cscmat(h_file_path; print_file_header=false)
@test hash_sparse_matrix(H) == UInt8[0x46, 0x63, 0xd9, 0x12, 0x4b, 0xd6, 0xb1, 0xdf, 0xaf, 0xe7, 0x4f, 0x5d,
0x7f, 0x7d, 0x47, 0x5a, 0x4c, 0xd9, 0x6a, 0xf8, 0xae, 0xbb, 0xbd, 0x22, 0xe6, 0xa9, 0x5d, 0x9d, 0xd4, 0x52, 0x33, 0xcb]
end
@testset "save_to_cscmat and load_cscmat" begin
Hqc = load_cscmat(qc_file_path; print_file_header=false)
lifting_factor = 32
H = load_cscmat(h_file_path; print_file_header=false)
for allow_omit_entries_if_only_stored_ones in [false, true]
for try_hex in [false, true]
for additional_header_lines in ["", "QC matrix with expansion factor 32", "QC matrix\nwith expansion factor\n\n# 32"]
target_file = tempname() * ".cscmat"
@show allow_omit_entries_if_only_stored_ones
@show try_hex
save_to_cscmat(
H, target_file;
allow_omit_entries_if_only_stored_ones, try_hex, additional_header_lines)
H_read = load_cscmat(target_file; print_file_header=false)
@test H_read == H
save_to_cscmat(
Hqc, target_file;
allow_omit_entries_if_only_stored_ones, try_hex, additional_header_lines)
H_read = load_cscmat(target_file; print_file_header=true)
@test H_read == Hqc
println("\n")
end
end
end
# test failures
@test_throws LDPCStorage.InconsistentBINCSCError load_matrix_from_qc_cscmat_file("$(pkgdir(LDPCStorage))/test/files/test_H.bincsc.json"; expansion_factor=32)
end
@testset "load_matrix_from_qc_cscmat_file" begin
H_32 = load_matrix_from_qc_cscmat_file(qc_file_path; expansion_factor=32)
H_auto = load_matrix_from_qc_cscmat_file(qc_file_path)
H_true = load_cscmat(h_file_path; print_file_header=false)
@test H_true == H_32
@test H_true == H_auto
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | code | 1769 | const ALL_CODEBLOCKS_IN_README = [
raw"""
```julia
julia> ]
(v1.9) pkg> add LDPCStorage
```
""",
raw"""
```julia
using SparseArrays
using LDPCStorage
H = sparse(Int8[
0 0 1 1 0 0 0 0 1 0 0 1 1 0
1 0 0 1 1 0 0 0 0 0 1 0 0 1
0 1 0 1 0 1 1 0 1 0 0 1 1 0
1 0 0 1 0 0 0 1 0 1 0 1 0 1
])
save_to_alist("./ldpc.alist", H)
H_alist = load_alist("./ldpc.alist")
H == H_alist || @warn("Failure")
save_to_bincscjson("./ldpc.bincsc.json", H)
H_csc = load_ldpc_from_json("./ldpc.bincsc.json")
H == H_csc || @warn("Failure")
open("./autogen_ldpc.hpp", "w+") do io
print_cpp_header(io, H)
end
```
""",
]
@testset "README only contains code blocks mentioned here" begin
# if this test fails, enter all codeblocks
readme_path = "$(pkgdir(LDPCStorage))/README.md"
readme_contents = read(readme_path, String)
for (i, code_block) in enumerate(ALL_CODEBLOCKS_IN_README)
# check that above code is contained verbatim in README
@test contains(readme_contents, code_block)
end
# check that README does not contain any other Julia code blocks
@test length(collect(eachmatch(r"```julia", readme_contents))) == length(ALL_CODEBLOCKS_IN_README)
end
@testset "codeblocks copied in README run without errors" begin
for (i, code_block) in enumerate(ALL_CODEBLOCKS_IN_README)
@testset "Codeblock $i" begin
if !contains(code_block, "julia>") # don't process examples that show REPL interaction (cannot be parsed like this)
# remove the ```julia ... ``` ticks and parse code
parsed_code = Meta.parseall(code_block[9:end-4])
# check if it runs without exceptions
eval(parsed_code)
end
end
end
end
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.5.0 | 761bfa6b913c8813712f8a214d28f31bc1d77cb9 | docs | 2325 | # LDPCStorage.jl
[](https://github.com/XQP-Munich/LDPCStorage.jl/actions)
[](https://codecov.io/gh/XQP-Munich/LDPCStorage.jl)
[](./LICENSE)
[](https://github.com/JuliaTesting/Aqua.jl)
[](https://doi.org/10.5281/zenodo.5589595)
*Reads and writes file formats for storing sparse matrices containing only zeros and ones.
Intended for use with low density parity check (LDPC) matrices.
Also supports efficient storage for quasi-cyclic LDPC codes.*
## Installation
Run [Julia](https://julialang.org/), enter ] to bring up Julia's package manager, and add the package:
```julia
julia> ]
(v1.9) pkg> add LDPCStorage
```
## Supported File Formats
- `alist` (by David MacKay et al., see http://www.inference.org.uk/mackay/codes/alist.html)
- `cscmat` (our custom format) DEPRECATED
- `bincsc.json` (Based on compressed sparse column (CSC). Valid `json`.)
- `qccsc.json` (Based on compressed sparse column (CSC). Valid `json`. Store exponents of quasi-cyclic LDPC matrices)
- `hpp (C++ header)` CSC of matrix as static data (write-only, reading not supported!)
## How to use
```julia
using SparseArrays
using LDPCStorage
H = sparse(Int8[
0 0 1 1 0 0 0 0 1 0 0 1 1 0
1 0 0 1 1 0 0 0 0 0 1 0 0 1
0 1 0 1 0 1 1 0 1 0 0 1 1 0
1 0 0 1 0 0 0 1 0 1 0 1 0 1
])
save_to_alist("./ldpc.alist", H)
H_alist = load_alist("./ldpc.alist")
H == H_alist || @warn("Failure")
save_to_bincscjson("./ldpc.bincsc.json", H)
H_csc = load_ldpc_from_json("./ldpc.bincsc.json")
H == H_csc || @warn("Failure")
open("./autogen_ldpc.hpp", "w+") do io
print_cpp_header(io, H)
end
```
There also are methods accepting an `IO` object: `print_alist`, `print_bincscjson`, etc.
Some methods support outputting quasi-cyclic exponents directly, e.g., `print_cpp_header_QC` outputs a C++ header.
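For instance, a minimal sketch of the quasi-cyclic header output (here `Hqc` and the expansion factor are hypothetical placeholder values):

```
using SparseArrays, LDPCStorage
Hqc = sparse([1 2; 0 1])  # hypothetical matrix of quasi-cyclic exponents
open("./autogen_ldpc_qc.hpp", "w+") do io
    print_cpp_header_QC(io, Hqc; expansion_factor=16)
end
```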
## Contributing
Contributions, feature requests and suggestions are welcome. Open an issue or contact us directly.
| LDPCStorage | https://github.com/XQP-Munich/LDPCStorage.jl.git |
|
[
"MIT"
] | 0.3.4 | 7b86a5d4d70a9f5cdf2dacb3cbe6d251d1a61dbe | code | 16855 | module MosaicViews
using PaddedViews
using OffsetArrays
using MappedArrays: of_eltype
using StackViews
export
MosaicView,
mosaicview,
mosaic
"""
MosaicView(A::AbstractArray)
Create a two dimensional "view" of the three or four dimensional
array `A`. The resulting `MosaicView` will display the data in
`A` such that it emulates using `vcat` for all elements in the
third dimension of `A`, and `hcat` for all elements in the fourth
dimension of `A`.
For example, if `size(A)` is `(2,3,4)`, then the resulting
`MosaicView` will have the size `(2*4,3)` which is `(8,3)`.
Alternatively, if `size(A)` is `(2,3,4,5)`, then the resulting
size will be `(2*4,3*5)` which is `(8,15)`.
Another way to think about this is that `MosaicView` creates a
mosaic of all the individual matrices enumerated in the third
(and optionally fourth) dimension of the given 3D or 4D array
`A`. This can be especially useful for creating a single
composite image from a set of equally sized images.
```jldoctest
julia> using MosaicViews
julia> A = [(k+1)*l-1 for i in 1:2, j in 1:3, k in 1:2, l in 1:2]
2×3×2×2 Array{Int64,4}:
[:, :, 1, 1] =
1 1 1
1 1 1
[:, :, 2, 1] =
2 2 2
2 2 2
[:, :, 1, 2] =
3 3 3
3 3 3
[:, :, 2, 2] =
5 5 5
5 5 5
julia> MosaicView(A)
4×6 MosaicViews.MosaicView{Int64,4,Array{Int64,4}}:
1 1 1 3 3 3
1 1 1 3 3 3
2 2 2 5 5 5
2 2 2 5 5 5
```
"""
struct MosaicView{T,N,A<:AbstractArray{T,N}} <: AbstractArray{T,2}
parent::A
dims::Tuple{Int,Int}
pdims::NTuple{N,Int}
function MosaicView{T,N}(A::AbstractArray{T,N}, dims) where {T,N}
1 <= N <= 4 || throw(ArgumentError("The given array must have dimensionality 1 <= N <= 4"))
# unless we store the axes in the struct, we can't support offset indices
# N=2 is a special case that we can provide a specialization on `axes`
# but for consistency with cases of other dimensions, disable it as well
require_one_based_indexing(A)
new{T,N,typeof(A)}(A, dims, size(A))
end
end
function MosaicView(A::AbstractArray{T,N}) where {T,N}
# vectors are lifted to a 2D matrix; 3D/4D arrays are tiled into a 2D mosaic
dims = (size(A,1) * size(A,3), size(A,2) * size(A,4))
MosaicView{T,N}(A, dims)
end
Base.parent(mva::MosaicView) = mva.parent
Base.size(mva::MosaicView) = mva.dims
# fallback for 1d/2d case
@inline Base.getindex(mva::MosaicView, ind::Int...) = mva.parent[ind...]
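# For 3D/4D parents, the methods below map the flat mosaic coordinates (i, j) back
# into the parent: the position inside a tile is recovered with mod-style arithmetic,
# and the tile itself (dims 3 and 4) with the corresponding integer divisions.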
@inline function Base.getindex(mva::MosaicView{T,3,A}, i::Int, j::Int) where {T,A}
@boundscheck checkbounds(mva, i, j)
pdims = mva.pdims
parent = mva.parent
idx1 = (i-1) % pdims[1] + 1
idx2 = (j-1) % pdims[2] + 1
idx3 = (i-1) ÷ pdims[1] + 1
@inbounds res = parent[idx1, idx2, idx3]
res
end
# FIXME: we need the annotation T because mosaicview + StackView is currently not type stable
@inline function Base.getindex(mva::MosaicView{T,4,A}, i::Int, j::Int)::T where {T,A}
@boundscheck checkbounds(mva, i, j)
pdims = mva.pdims
parent = mva.parent
idx1 = (i-1) % pdims[1] + 1
idx2 = (j-1) % pdims[2] + 1
idx3 = (i-1) ÷ pdims[1] + 1
idx4 = (j-1) ÷ pdims[2] + 1
@inbounds res = parent[idx1, idx2, idx3, idx4]
res
end
"""
mosaicview(A::AbstractArray;
[fillvalue=<zero unit>], [npad=0],
[nrow], [ncol], [rowmajor=false]) -> MosaicView
mosaicview(As::AbstractArray...; kwargs...)
mosaicview(As::Union{Tuple, AbstractVector}; kwargs...)
Create a two dimensional "view" from array `A`.
The resulting [`MosaicView`](@ref) will display all the matrix
slices of the first two dimensions of `A` arranged as a single
large mosaic (in the form of a matrix).
# Arguments
In contrast to using the constructor of [`MosaicView`](@ref)
directly, the function `mosaicview` also allows for a couple of
convenience keywords. A typical use case would be to create an
image mosaic from a set of input images.
- The parameter `fillvalue` defines the value that
that should be used for empty space. This can be padding caused
by `npad`, or empty mosaic tiles in case the number of matrix
slices in `A` is smaller than `nrow*ncol`.
- The parameter `npad` defines the empty padding space between
adjacent mosaic tiles. This can be especially useful if the
individual tiles (i.e. matrix slices in `A`) are images that
should be visually separated by some grid lines.
- The parameters `nrow` and `ncol` can be used to choose the
number of tiles in row and/or column direction the mosaic should
be arranged in. Note that it suffices to specify one of the
two parameters, as the other one can be inferred accordingly.
The default in case none of the two are specified is `nrow = size(A,3)`.
- If `rowmajor` is set to `true`, then the slices will be
arranged left-to-right-top-to-bottom, instead of
top-to-bottom-left-to-right (default). The layout only differs
in non-trivial cases, i.e., when `nrow != 1` and `ncol != 1`.
!!! tip
This function is not type stable and should only be used if
performance is not a priority. To achieve optimized performance,
you need to manually construct a [`MosaicView`](@ref).
# Examples
The simplest usage is to `cat` two arrays of the same dimension.
```julia-repl
julia> A1 = fill(1, 3, 1)
3×1 Array{Int64,2}:
1
1
1
julia> A2 = fill(2, 1, 3)
1×3 Array{Int64,2}:
2 2 2
julia> mosaicview(A1, A2)
6×3 MosaicView{Int64,4, ...}:
0 1 0
0 1 0
0 1 0
0 0 0
2 2 2
0 0 0
julia> mosaicview(A1, A2; center=false)
6×3 MosaicView{Int64,4, ...}:
1 0 0
1 0 0
1 0 0
2 2 2
0 0 0
0 0 0
```
Other keyword arguments can be useful to get a nice looking results.
```julia-repl
julia> using MosaicViews
julia> A = [k for i in 1:2, j in 1:3, k in 1:5]
2×3×5 Array{Int64,3}:
[:, :, 1] =
1 1 1
1 1 1
[:, :, 2] =
2 2 2
2 2 2
[:, :, 3] =
3 3 3
3 3 3
[:, :, 4] =
4 4 4
4 4 4
[:, :, 5] =
5 5 5
5 5 5
julia> mosaicview(A, ncol=2)
6×6 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 4 4 4
1 1 1 4 4 4
2 2 2 5 5 5
2 2 2 5 5 5
3 3 3 0 0 0
3 3 3 0 0 0
julia> mosaicview(A, nrow=2)
4×9 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 3 3 3 5 5 5
1 1 1 3 3 3 5 5 5
2 2 2 4 4 4 0 0 0
2 2 2 4 4 4 0 0 0
julia> mosaicview(A, nrow=2, rowmajor=true)
4×9 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 2 2 2 3 3 3
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 0 0 0
4 4 4 5 5 5 0 0 0
julia> mosaicview(A, nrow=2, npad=1, rowmajor=true)
5×11 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 0 2 2 2 0 3 3 3
1 1 1 0 2 2 2 0 3 3 3
0 0 0 0 0 0 0 0 0 0 0
4 4 4 0 5 5 5 0 0 0 0
4 4 4 0 5 5 5 0 0 0 0
julia> mosaicview(A, fillvalue=-1, nrow=2, npad=1, rowmajor=true)
5×11 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 -1 2 2 2 -1 3 3 3
1 1 1 -1 2 2 2 -1 3 3 3
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
4 4 4 -1 5 5 5 -1 -1 -1 -1
4 4 4 -1 5 5 5 -1 -1 -1 -1
```
"""
function mosaicview(A::AbstractArray{T,3};
fillvalue = zero(T),
npad = 0,
nrow = -1,
ncol = -1,
rowmajor = false,
kwargs...) where T # delete `kwargs...` when we delete the `center` depwarn
nrow == -1 || nrow > 0 || throw(ArgumentError("The parameter \"nrow\" must be greater than 0"))
ncol == -1 || ncol > 0 || throw(ArgumentError("The parameter \"ncol\" must be greater than 0"))
npad >= 0 || throw(ArgumentError("The parameter \"npad\" must be greater than or equal to 0"))
if !isempty(kwargs)
if haskey(kwargs, :center)
Base.depwarn("use of `center` in `mosaicview` is deprecated; see `mosaic`", :mosaicview)
else
Base.depwarn("passing extraneous keyword arguments $(values(kwargs)) to `mosaicview` is deprecated", :mosaicview)
end
end
ntile = size(A,3)
ntile_ceil = ntile # ntile need not divide evenly into nrow*ncol
if nrow == -1 && ncol == -1
# automatically choose nrow to reflect what MosaicView does
nrow = ntile
ncol = 1
elseif nrow == -1
# compute nrow based on ncol
nrow = ceil(Int, ntile / ncol)
ntile_ceil = nrow * ncol
elseif ncol == -1
# compute ncol based on nrow
ncol = ceil(Int, ntile / nrow)
ntile_ceil = nrow * ncol
else
# accept nrow and ncol as is if it covers at least all
# existing tiles
ntile_ceil = nrow * ncol
ntile_ceil < ntile && throw(ArgumentError("The product of the parameters \"ncol\" (value: $ncol) and \"nrow\" (value: $nrow) must be greater than or equal to $ntile"))
end
# we pad size(A,3) to nrow*ncol. we also pad the first two
# dimensions according to npad. think of this as border
# between tiles (useful for images)
pad_dims = (size(A,1) + npad, size(A,2) + npad, ntile_ceil)
A_pad = PaddedView(convert(T, fillvalue), A, pad_dims)
# next we reshape the image such that it reflects the
# specified nrow and ncol
A_new = if !rowmajor
res_dims = (size(A_pad,1), size(A_pad,2), nrow, ncol)
reshape(A_pad, res_dims)
else
# same as above but we additionally permute dimensions
# to mimic row first layout for the tiles. this is useful
# for images since user often reads these from left-to-right
# before top-to-bottom. (note the swap of "ncol" and "nrow")
res_dims = (size(A_pad,1), size(A_pad,2), ncol, nrow)
A_tp = reshape(A_pad, res_dims)
PermutedDimsArray{eltype(A_tp), 4, (1, 2, 4, 3), (1, 2, 4, 3), typeof(A_tp)}(A_tp)
end
# decrease size of the resulting MosaicView by npad to not have
# a border on the right side and bottom side of the final mosaic.
dims = (size(A_new,1) * size(A_new,3) - npad, size(A_new,2) * size(A_new,4) - npad)
MosaicView{T,4}(A_new, dims)
end
function mosaicview(A::AbstractArray{T,N};
nrow = -1,
ncol = -1,
kwargs...) where {T,N}
# if neither nrow nor ncol is provided then automatically choose
# nrow and ncol to reflect what MosaicView does (i.e. use size)
if nrow == -1 && ncol == -1
nrow = size(A, 3)
# ncol = size(A, 4)
end
mosaicview(reshape(A, (size(A,1), size(A,2), :));
nrow=nrow, ncol=ncol, kwargs...)
end
"""
mosaic(A1, A2...; center=true, kwargs...)
mosaic([A1, A2, ...]; center=true, kwargs...)
Create a mosaic out of input arrays `A1`, `A2`, .... `mosaic` is essentially
a more flexible version of `cat` or `hvcat`; like them it makes a copy of
the inputs rather than returning a "view."
If `center` is set to `true`, then the padded arrays will be shifted
to the center; if set to false, they shift to the top-left corner. This
parameter is only useful when arrays are of different sizes.
All the keyword arguments of [`mosaicview`](@ref) are also supported.
"""
@inline mosaic(As::AbstractArray...; kwargs...) = mosaic(As; kwargs...)
function mosaic(As::AbstractVector{<:AbstractArray};
fillvalue=zero(_filltype(As)),
center::Bool=true,
kwargs...)
length(As) == 0 && throw(ArgumentError("The given vector should not be empty"))
nd = ndims(As[1])
all(A->ndims(A)==nd, As) || throw(ArgumentError("All arrays should have the same dimension"))
T = _filltype(As)
fillvalue = convert(T, fillvalue)
mosaicview(_padded_cat(As; center=center, fillvalue=fillvalue, dims=valdim(first(As)));
fillvalue=fillvalue, kwargs...)
end
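# Usage sketch (differently sized inputs are padded with `fillvalue` and, by default, centered):
#     A1 = fill(1, 3, 1); A2 = fill(2, 1, 3)
#     mosaic(A1, A2; fillvalue=-1, npad=1, nrow=1)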
function mosaic(As::Tuple;
fillvalue=zero(_filltype(As)),
center::Bool=true,
kwargs...)
length(As) == 0 && throw(ArgumentError("The given tuple should not be empty"))
nd = ndims(As[1])
all(A->ndims(A)==nd, As) || throw(ArgumentError("All arrays should have the same dimension"))
T = _filltype(As)
fillvalue = convert(T, fillvalue)
vd = valdim(first(As))
if isconcretetype(eltype(As)) || VERSION < v"1.2.0"
# Base.inferencebarrier requires Julia at least v1.2.0
mosaicview(_padded_cat(As; center=center, fillvalue=fillvalue, dims=vd);
fillvalue=fillvalue, kwargs...)
else
# Reduce latency by despecializing calls with heterogeneous array types
mosaicview(_padded_cat(Base.inferencebarrier(As); center=center, fillvalue=Base.inferencebarrier(fillvalue), dims=Base.inferencebarrier(vd));
fillvalue=fillvalue, kwargs...)
end
end
valdim(A::AbstractArray{T,0}) where T = Val(3)
valdim(A::AbstractVector) = Val(3)
valdim(A::AbstractMatrix) = Val(3)
valdim(A::AbstractArray{T,N}) where {T,N} = Val(N+1)
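# e.g. valdim(ones(2, 3)) === Val(3): vectors and matrices are stacked along dim 3,
# while higher-dimensional inputs are stacked along a new trailing dimension N+1.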
function _padded_cat(imgs, center::Bool, fillvalue, dims)
@nospecialize # because this is frequently called with heterogeneous inputs, we @nospecialize it
pv(@nospecialize(imgs::AbstractVector{<:AbstractArray})) = PaddedViews.paddedviews_itr(fillvalue, imgs)
pv(@nospecialize(imgs)) = paddedviews(fillvalue, imgs...)
sym_pv(@nospecialize(imgs::AbstractVector{<:AbstractArray})) = PaddedViews.sym_paddedviews_itr(fillvalue, imgs)
sym_pv(@nospecialize(imgs)) = sym_paddedviews(fillvalue, imgs...)
pv_fn = center ? sym_pv : pv
return StackView{_filltype(imgs)}(pv_fn(imgs), dims)
end
# compat: some packages uses this method
_padded_cat(imgs; center::Bool, fillvalue, dims) = _padded_cat(imgs, center, fillvalue, dims)
has_common_axes(@nospecialize(imgs)) = isempty(imgs) || all(isequal(axes(first(imgs))) ∘ axes, imgs)
# This uses Union{} as a sentinel eltype (all other types "beat" it),
# and Bool as a near-neutral fill type.
_filltype(As) = PaddedViews.filltype(Bool, _filltypeT(Union{}, As...))
@inline _filltypeT(::Type{T}, A, tail...) where T = _filltypeT(promote_wrapped_type(T, _gettype(A)), tail...)
_filltypeT(::Type{T}) where T = T
# When the inputs are homogenous we can circumvent varargs despecialization
# This also handles the case of empty `As` but concrete `T`.
function _filltype(As::AbstractVector{A}) where A<:AbstractArray{T} where T
# (!@isdefined(T) || T === Any) && return invoke(_filltype, Tuple{Any}, As)
T === Any && return invoke(_filltype, Tuple{Any}, As)
return PaddedViews.filltype(Bool, T)
end
_gettype(A::AbstractArray{T}) where T = T === Any ? typeof(first(A)) : T
"""
promote_wrapped_type(S, T)
Similar to `promote_type`, except designed to be extensible to cases where promotion should occur through a wrapper type.
`promote_wrapped_type` is used by `_filltype` to compute the common element type for handling heterogeneous types when building the mosaic.
It does not have the order-independence of `promote_type`, and you should extend it directly rather than via a `promote_rule`-like mechanism.
# Example
Suppose you have
```
struct MyWrapper{T}
x::T
end
```
and you don't want to define `promote_type(MyWrapper{Int},Float32)` generally as anything other than `Any`,
but for the purpose of building mosaics a `MyWrapper{Float32}` would be a valid common type.
Then you could define
```
MosaicViews.promote_wrapped_type(::Type{MyWrapper{S}}, ::Type{MyWrapper{T}}) where {S,T} = MyWrapper{MosaicViews.promote_wrapped_type(S,T)}
MosaicViews.promote_wrapped_type(::Type{MyWrapper{S}}, ::Type{T}) where {S,T} = MyWrapper{MosaicViews.promote_wrapped_type(S,T)}
MosaicViews.promote_wrapped_type(::Type{S}, ::Type{MyWrapper{T}}) where {S,T} = MosaicViews.promote_wrapped_type(MyWrapper{T}, S)
```
"""
promote_wrapped_type(::Type{S}, ::Type{T}) where {S, T} = promote_type(S, T)
### compat
if VERSION < v"1.2"
require_one_based_indexing(A...) = !Base.has_offset_axes(A...) || throw(ArgumentError("offset arrays are not supported but got an array with index other than 1"))
else
const require_one_based_indexing = Base.require_one_based_indexing
end
### deprecations
@deprecate mosaicview(A1::AbstractArray, A2::AbstractArray; kwargs...) mosaic(A1, A2; kwargs...) # prevent A2 from being interpreted as fillvalue
@deprecate mosaicview(As::AbstractArray...; kwargs...) mosaic(As...; kwargs...)
@deprecate mosaicview(As::AbstractVector{<:AbstractArray};
fillvalue=zero(_filltype(As)),
center::Bool=true,
kwargs...) mosaic(As; fillvalue=fillvalue, center=center, kwargs...)
@deprecate mosaicview(As::Tuple;
fillvalue=zero(_filltype(As)),
center::Bool=true,
kwargs...) mosaic(As; fillvalue=fillvalue, center=center, kwargs...)
end # module
| MosaicViews | https://github.com/JuliaArrays/MosaicViews.jl.git |
|
[
"MIT"
] | 0.3.4 | 7b86a5d4d70a9f5cdf2dacb3cbe6d251d1a61dbe | code | 14261 | using MosaicViews
using Test
using ImageCore, ColorVectorSpace
using OffsetArrays
# Because of the difference in axes types between paddedviews (Base.OneTo) and
# sym_paddedviews (UnitRange), the return type of `_padded_cat` isn't inferrable to a
# concrete type. But it is inferrable to a Union{A,B} where both A and B are concrete.
# While `@inferred(_padded_cat((A, B), ...))` would therefore fail, this is a close substitute.
function _checkinferred_paddedcat(V, A; kwargs...)
vd = MosaicViews.valdim(first(A))
RTs = Base.return_types(MosaicViews._padded_cat, (typeof(A), Bool, eltype(V), typeof(vd)))
@test length(RTs) == 1
RT = RTs[1]
@test isconcretetype(RT) || (isa(RT, Union) && isconcretetype(RT.a) && isconcretetype(RT.b))
return V
end
function checkinferred_mosaic(As::Tuple; kwargs...)
V = mosaic(As; kwargs...)
return _checkinferred_paddedcat(V, As)
end
function checkinferred_mosaic(A...; kwargs...)
V = mosaic(A...; kwargs...)
return _checkinferred_paddedcat(V, A)
end
function checkinferred_mosaic(As::AbstractVector{<:AbstractArray}; kwargs...)
V = mosaic(As; kwargs...)
return _checkinferred_paddedcat(V, (As...,); kwargs...)
end
@testset "MosaicView" begin
@test_throws ArgumentError MosaicView(rand(2,2,2,2,2))
@testset "1D input" begin
A = [1,2,3]
mva = @inferred MosaicView(A)
@test parent(mva) === A
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (3, 1) # 1D vector is lifted to 2D matrix
@test axes(mva) == (Base.OneTo(3), Base.OneTo(1))
@test @inferred(getindex(mva, 1, 1)) === 1
@test @inferred(getindex(mva, 2, 1)) === 2
end
@testset "2D input" begin
A = [1 2;3 4]
mva = @inferred MosaicView(A)
@test parent(mva) === A
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (2, 2)
@test axes(mva) == (Base.OneTo(2), Base.OneTo(2))
@test @inferred(getindex(mva, 1, 1)) === 1
@test @inferred(getindex(mva, 2, 1)) === 3
end
@testset "3D input" begin
A = zeros(Int,2,2,2)
A[:,:,1] = [1 2; 3 4]
A[:,:,2] = [5 6; 7 8]
mva = @inferred MosaicView(A)
@test parent(mva) === A
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (4, 2)
@test @inferred(getindex(mva,1,1)) === 1
@test @inferred(getindex(mva,2,1)) === 3
@test_throws BoundsError mva[0,1]
@test_throws BoundsError mva[1,0]
@test_throws BoundsError mva[1,3]
@test_throws BoundsError mva[5,1]
@test all(mva .== vcat(A[:,:,1],A[:,:,2]))
# singleton dimension doesn't change anything
@test mva == MosaicView(reshape(A,2,2,2,1))
end
@testset "4D input" begin
A = zeros(Int,2,2,1,2)
A[:,:,1,1] = [1 2; 3 4]
A[:,:,1,2] = [5 6; 7 8]
mva = @inferred MosaicView(A)
@test parent(mva) === A
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (2, 4)
@test @inferred(getindex(mva,1,1)) === 1
@test @inferred(getindex(mva,2,1)) === 3
@test_throws BoundsError mva[0,1]
@test_throws BoundsError mva[1,0]
@test_throws BoundsError mva[3,1]
@test_throws BoundsError mva[1,5]
@test all(mva .== hcat(A[:,:,1,1],A[:,:,1,2]))
A = zeros(Int,2,2,2,3)
A[:,:,1,1] = [1 2; 3 4]
A[:,:,1,2] = [5 6; 7 8]
A[:,:,1,3] = [9 10; 11 12]
A[:,:,2,1] = [13 14; 15 16]
A[:,:,2,2] = [17 18; 19 20]
A[:,:,2,3] = [21 22; 23 24]
mva = @inferred MosaicView(A)
@test parent(mva) === A
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (4, 6)
@test @inferred(getindex(mva,1,1)) === 1
@test @inferred(getindex(mva,2,1)) === 3
@test all(mva .== vcat(hcat(A[:,:,1,1],A[:,:,1,2],A[:,:,1,3]), hcat(A[:,:,2,1],A[:,:,2,2],A[:,:,2,3])))
end
end
@testset "mosaicview" begin
@testset "1D input" begin
A1 = [1, 2, 3]
A2 = [4, 5, 6]
mva = checkinferred_mosaic(A1, A2)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A1)
@test size(mva) == (6, 1)
@test @inferred(getindex(mva, 1, 1)) == 1
@test checkinferred_mosaic(A1, A2; nrow=1) == [1 4; 2 5; 3 6]
@test mosaic(A1, A2; nrow=2) == reshape([1, 2, 3, 4, 5, 6], (6, 1))
@test mosaic(A1, A2) == mosaic([A1, A2]) == mosaic((A1, A2))
end
@testset "2D input" begin
A = [i*ones(Int, 2, 3) for i in 1:4]
Ao = [i*ones(Int, 0i:1, 0:2) for i in 1:4]
for B in (A, tuple(A...), Ao)
@test_throws ArgumentError mosaic(B, nrow=0)
@test_throws ArgumentError mosaic(B, ncol=0)
@test_throws ArgumentError mosaic(B, nrow=1, ncol=1)
mva = checkinferred_mosaic(B)
@test mosaic(B...) == mva
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(eltype(B))
@test size(mva) == (8, 3)
@test @inferred(getindex(mva,3,1)) === 2
@test collect(mva) == [
1 1 1
1 1 1
2 2 2
2 2 2
3 3 3
3 3 3
4 4 4
4 4 4
]
mva = checkinferred_mosaic(B, nrow=2)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(eltype(B))
@test size(mva) == (4, 6)
@test collect(mva) == [
1 1 1 3 3 3
1 1 1 3 3 3
2 2 2 4 4 4
2 2 2 4 4 4
]
end
@test mosaic(A...) == mosaic(A)
@test mosaic(A..., nrow=2) == mosaic(A, nrow=2)
@test mosaic(A..., nrow=2, rowmajor=true) == mosaic(A, nrow=2, rowmajor=true)
A1 = reshape([1 2 3], (1, 3))
A2 = reshape([4;5;6], (3, 1))
@test checkinferred_mosaic([A1, A2]; center=false) == [
1 2 3;
0 0 0;
0 0 0;
4 0 0;
5 0 0;
6 0 0
]
@test mosaic([A1, A2]; center=true) == [
0 0 0;
1 2 3;
0 0 0;
0 4 0;
0 5 0;
0 6 0
]
@test mosaic([A1, A2]) == mosaic([A1, A2]; center=true)
# same size but different axes
A1 = fill(1, 1:2, 1:2)
A2 = fill(2, 2:3, 2:3)
@test collect(checkinferred_mosaic(A1, A2; center=true)) == [
1 1 0;
1 1 0;
0 0 0;
2 2 0;
2 2 0;
0 0 0;
]
@test collect(mosaic(A1, A2; center=false)) == [
1 1 0;
1 1 0;
0 0 0;
0 0 0;
0 2 2;
0 2 2;
]
@test mosaicview(A1) == A1 # a trivial case
end
@testset "3D input" begin
A = [(k+1)*l-1 for i in 1:2, j in 1:3, k in 1:2, l in 1:2]
B = reshape(A, 2, 3, :)
@test_throws ArgumentError mosaicview(B, nrow=0)
@test_throws ArgumentError mosaicview(B, ncol=0)
@test_throws ArgumentError mosaicview(B, nrow=1, ncol=1)
mva = checkinferred_mosaic(B)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(B)
@test size(mva) == (8, 3)
@test @inferred(getindex(mva,3,1)) === 2
@test mva == [
1 1 1
1 1 1
2 2 2
2 2 2
3 3 3
3 3 3
5 5 5
5 5 5
]
mva = checkinferred_mosaic(B, nrow=2)
@test mva == MosaicView(A)
@test typeof(mva) != typeof(MosaicView(A))
@test parent(parent(mva)).data == B
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(B)
@test size(mva) == (4, 6)
@test mosaic(B, B) == mosaicview(cat(B, B; dims=4))
@test mosaic(B, B, nrow=2) == mosaicview(cat(B, B; dims=4), nrow=2)
@test mosaic(B, B, nrow=2, rowmajor=true) == mosaicview(cat(B, B; dims=4), nrow=2, rowmajor=true)
end
@testset "4D input" begin
A = [(k+1)*l-1 for i in 1:2, j in 1:3, k in 1:2, l in 1:2]
@test_throws ArgumentError mosaicview(A, nrow=0)
@test_throws ArgumentError mosaicview(A, ncol=0)
@test_throws ArgumentError mosaicview(A, nrow=1, ncol=1)
mva = mosaicview(A)
@test mva == MosaicView(A)
@test typeof(mva) != typeof(MosaicView(A))
@test parent(parent(mva)).data == reshape(A, 2, 3, :)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (4, 6)
mva = mosaicview(A, npad=1)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (5, 7)
@test mva == mosaicview(A, nrow=2, npad=1)
@test mva == mosaicview(A, ncol=2, npad=1)
@test @inferred(getindex(mva,3,1)) === 0
@test @inferred(getindex(mva,2,5)) === 3
@test mva == [
1 1 1 0 3 3 3
1 1 1 0 3 3 3
0 0 0 0 0 0 0
2 2 2 0 5 5 5
2 2 2 0 5 5 5
]
mva = mosaicview(A, ncol=3, npad=1)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (5, 11)
@test mva == [
1 1 1 0 3 3 3 0 0 0 0
1 1 1 0 3 3 3 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
2 2 2 0 5 5 5 0 0 0 0
2 2 2 0 5 5 5 0 0 0 0
]
mva = mosaicview(A, rowmajor=true, ncol=3, npad=1)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (5, 11)
@test @inferred(getindex(mva,3,1)) === 0
@test @inferred(getindex(mva,2,5)) === 2
@test mva == [
1 1 1 0 2 2 2 0 3 3 3
1 1 1 0 2 2 2 0 3 3 3
0 0 0 0 0 0 0 0 0 0 0
5 5 5 0 0 0 0 0 0 0 0
5 5 5 0 0 0 0 0 0 0 0
]
mva = mosaicview(A, rowmajor=true, ncol=3, npad=2)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (6, 13)
@test mva == [
1 1 1 0 0 2 2 2 0 0 3 3 3
1 1 1 0 0 2 2 2 0 0 3 3 3
0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0
5 5 5 0 0 0 0 0 0 0 0 0 0
5 5 5 0 0 0 0 0 0 0 0 0 0
]
mva = mosaicview(A, fillvalue=-1.0, rowmajor=true, ncol=3, npad=1)
@test typeof(mva) <: MosaicView
@test eltype(mva) == eltype(A)
@test size(mva) == (5, 11)
@test mva == [
1 1 1 -1 2 2 2 -1 3 3 3
1 1 1 -1 2 2 2 -1 3 3 3
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
5 5 5 -1 -1 -1 -1 -1 -1 -1 -1
5 5 5 -1 -1 -1 -1 -1 -1 -1 -1
]
@test mosaic(A, A) == mosaicview(cat(A, A; dims=5))
@test mosaic(A, A, nrow=2) == mosaicview(cat(A, A; dims=4), nrow=2)
@test mosaic(A, A, nrow=2, rowmajor=true) == mosaicview(cat(A, A; dims=4), nrow=2, rowmajor=true)
end
@testset "Colorant Array" begin
A = rand(RGB{Float32}, 2, 3, 2, 2)
mvaa = mosaicview(A)
@test eltype(mvaa) == eltype(A)
@test mvaa == @inferred(MosaicView(A))
mvaa = mosaicview(A, rowmajor=true, ncol=3)
@test eltype(mvaa) == eltype(A)
@test @inferred(getindex(mvaa, 3, 4)) == RGB(0,0,0)
mvaa = mosaicview(A, fillvalue=colorant"white", rowmajor=true, ncol=3)
@test eltype(mvaa) == eltype(A)
@test @inferred(getindex(mvaa, 3, 4)) == RGB(1,1,1)
# this should work regardless they're of different size and color
@test_nowarn mosaic(rand(RGB{Float32}, 4, 4),
rand(Gray{N0f8}, 5, 5))
end
# all arrays should have the same dimension
@test_throws ArgumentError mosaic(ones(2), ones(1, 2))
@test_throws ArgumentError mosaic((ones(2), ones(1, 2)))
@test_throws ArgumentError mosaic([ones(2), ones(1, 2)])
@testset "filltype" begin
# always a concrete type
A = checkinferred_mosaic(rand(N0f8, 4, 4), rand(Float64, 4, 4), rand(Float32, 4, 4))
@test eltype(A) == Float64
A = mosaic(Any[1 2 3; 4 5 6], rand(Float32, 4, 4))
@test eltype(A) == Float32
A = mosaic(rand(Float32, 4, 4), Any[1 2 3; 4 5 6])
@test eltype(A) == Float32
# FIXME:
# A = checkinferred_mosaic(rand(Float64, 4, 4), Union{Missing, Float32}[1 2 3; 4 5 6])
A = mosaic(rand(Float64, 4, 4), Union{Missing, Float32}[1 2 3; 4 5 6])
@test eltype(A) == Union{Missing, Float64}
end
end
@testset "deprecations" begin
@info "deprecations are expected"
# mosaicview -> mosaic deprecations
A = [(k+1)*l-1 for i in 1:2, j in 1:3, k in 1:2, l in 1:2]
mva_old = mosaicview(A, A, rowmajor=true, ncol=3, npad=1)
mva_new = mosaic(A, A, rowmajor=true, ncol=3, npad=1)
@test mva_old == mva_new
mva_old = mosaicview(A, A, A, rowmajor=true, ncol=3, npad=1)
mva_new = mosaic(A, A, A, rowmajor=true, ncol=3, npad=1)
@test mva_old == mva_new
mva_old = mosaicview([A, A], rowmajor=true, ncol=3, npad=1)
mva_new = mosaic([A, A], rowmajor=true, ncol=3, npad=1)
@test mva_old == mva_new
mva_old = mosaicview((A, A), rowmajor=true, ncol=3, npad=1)
mva_new = mosaic((A, A), rowmajor=true, ncol=3, npad=1)
@test mva_old == mva_new
# center keyword for `mosaicview` (still applies for `mosaic`)
A = [(k+1)*l-1 for i in 1:2, j in 1:3, k in 1:2, l in 1:2]
B = reshape(A, 2, 3, :)
@test mosaicview(B, center=2) == mosaicview(B) # no op
@test mosaicview(A, center=2) == mosaicview(A) # no op
end
| MosaicViews | https://github.com/JuliaArrays/MosaicViews.jl.git |
|
["MIT"] | 0.3.4 | 7b86a5d4d70a9f5cdf2dacb3cbe6d251d1a61dbe | docs | 7582 |
# MosaicViews
[![Travis-CI][travis-img]][travis-url]
[![CodeCov][codecov-img]][codecov-url]
[![PkgEval][pkgeval-img]][pkgeval-url]
## Motivations
When visualizing images, it is common to lay out several image sources in a single 2D view,
for example to compare multiple images of different sizes or to get a preview of a machine
learning dataset. This package provides easy-to-use tools for such tasks.
## Usage
### Compare two or more images
When comparing and showing multiple images, `cat`/`hcat`/`vcat`/`hvcat` can be helpful if the image
sizes and element types are the same. If they are not, you'll need `mosaic` for this purpose.
```julia
# ImageCore reexports MosaicViews with some glue code for images
julia> using ImageCore, ImageShow, TestImages, ColorVectorSpace
julia> toucan = testimage("toucan") # 150×162 RGBA image
julia> moon = testimage("moon") # 256×256 Gray image
julia> mosaic(toucan, moon; nrow=1)
```

Like `cat`, `mosaic` makes a copy of the inputs.
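For example, here is a minimal sketch with plain integer arrays; because the data were copied, mutating an input afterwards does not change the already-built mosaic:
```julia
julia> A1 = ones(Int, 2, 2);

julia> m = mosaic(A1, A1);

julia> A1[1, 1] = 5;   # mutate an input after the call

julia> m[1, 1]         # the mosaic still holds the copied value
1
```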
### Get a preview of dataset
Many datasets in the machine learning field are stored as 3D/4D arrays, where individual images are slices
along the 3rd and 4th dimensions.
`mosaicview` provides a convenient way to visualize such a higher-dimensional array as a single 2D grid of images.
```julia
julia> using MosaicViews, ImageShow, MLDatasets
julia> A = MNIST.convert2image(MNIST.traintensor(1:9))
28×28×9 Array{Gray{Float64},3}:
[...]
julia> mosaicview(A, fillvalue=.5, nrow=2, npad=1, rowmajor=true)
57×144 MosaicViews.MosaicView{Gray{Float64},4,...}:
[...]
```

Unlike `mosaic`, `mosaicview` does not copy the input; it provides an alternative interpretation of the same underlying data.
Consequently, if you modify pixels of the output of `mosaicview`, those modifications also apply to the parent array `A`.
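For example, in this minimal sketch a change to the parent array is immediately visible through the view:
```julia
julia> A = zeros(Int, 2, 2, 2);

julia> mva = mosaicview(A);

julia> A[1, 1, 1] = 7;   # mutate the parent array...

julia> mva[1, 1]         # ...and the view reflects the change
7
```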
`mosaicview` is essentially a flexible way of constructing a `MosaicView`; it provides
additional customization options via keyword arguments.
If you do not need the flexibility of `mosaicview`, you can directly call the `MosaicView` constructor.
The remainder of this page illustrates the various options for `mosaic` and `mosaicview` and then covers the low-level `MosaicView` constructor.
### More on the keyword options
`mosaic` and `mosaicview` use almost all the same keyword arguments (all except `center`, which is not relevant for `mosaicview`).
Let's illustrate some of the effects you can achieve.
First, in the simplest case:
```julia
julia> A1 = fill(1, 3, 1)
3×1 Array{Int64,2}:
1
1
1
julia> A2 = fill(2, 1, 3)
1×3 Array{Int64,2}:
2 2 2
# A1 and A2 will be padded to the common size and shifted
# to the center, this is a common operation to visualize
# multiple images
julia> mosaic(A1, A2)
6×3 MosaicView{Int64,4, ...}:
0 1 0
0 1 0
0 1 0
0 0 0
2 2 2
0 0 0
```
If desired, you can disable the automatic centering:
```julia
# disable center shift
julia> mosaic(A1, A2; center=false)
6×3 MosaicView{Int64,4, ...}:
1 0 0
1 0 0
1 0 0
2 2 2
0 0 0
0 0 0
```
You can also control the placement of tiles. Here this is illustrated for `mosaicview`, but
the same options apply for `mosaic`:
```julia
julia> A = [k for i in 1:2, j in 1:3, k in 1:5]
2×3×5 Array{Int64,3}:
[:, :, 1] =
1 1 1
1 1 1
[:, :, 2] =
2 2 2
2 2 2
[:, :, 3] =
3 3 3
3 3 3
[:, :, 4] =
4 4 4
4 4 4
[:, :, 5] =
5 5 5
5 5 5
# number of tiles in column direction
julia> mosaicview(A, ncol=2)
6×6 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 4 4 4
1 1 1 4 4 4
2 2 2 5 5 5
2 2 2 5 5 5
3 3 3 0 0 0
3 3 3 0 0 0
# number of tiles in row direction
julia> mosaicview(A, nrow=2)
4×9 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 3 3 3 5 5 5
1 1 1 3 3 3 5 5 5
2 2 2 4 4 4 0 0 0
2 2 2 4 4 4 0 0 0
# take a row-major order, i.e., tile-wise permute
julia> mosaicview(A, nrow=2, rowmajor=true)
4×9 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 2 2 2 3 3 3
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 0 0 0
4 4 4 5 5 5 0 0 0
# add empty padding space between adjacent mosaic tiles
julia> mosaicview(A, nrow=2, npad=1, rowmajor=true)
5×11 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 0 2 2 2 0 3 3 3
1 1 1 0 2 2 2 0 3 3 3
0 0 0 0 0 0 0 0 0 0 0
4 4 4 0 5 5 5 0 0 0 0
4 4 4 0 5 5 5 0 0 0 0
# fill spaces with -1
julia> mosaicview(A, fillvalue=-1, nrow=2, npad=1, rowmajor=true)
5×11 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 -1 2 2 2 -1 3 3 3
1 1 1 -1 2 2 2 -1 3 3 3
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
4 4 4 -1 5 5 5 -1 -1 -1 -1
4 4 4 -1 5 5 5 -1 -1 -1 -1
```
### The `MosaicView` Type
The `MosaicView` constructor is simple and straightforward;
if you need more layout options, consider calling it indirectly
through `mosaicview`.
The layout of the mosaic is encoded in the third
(and optionally fourth) dimension. Creating a `MosaicView` this
way is type stable, non-copying, and should in general give
decent performance when accessed with `getindex`.
Let us look at a couple examples to see the type in action. If
`size(A)` is `(2,3,4)`, then the resulting `MosaicView` will have
the size `(2*4,3)` which is `(8,3)`.
```julia
julia> A = [k for i in 1:2, j in 1:3, k in 1:4]
2×3×4 Array{Int64,3}:
[:, :, 1] =
1 1 1
1 1 1
[:, :, 2] =
2 2 2
2 2 2
[:, :, 3] =
3 3 3
3 3 3
[:, :, 4] =
4 4 4
4 4 4
julia> MosaicView(A)
8×3 MosaicViews.MosaicView{Int64,3,Array{Int64,3}}:
1 1 1
1 1 1
2 2 2
2 2 2
3 3 3
3 3 3
4 4 4
4 4 4
```
Alternatively, `A` is also allowed to have four dimensions. More
concretely, if `size(A)` is `(2,3,4,5)`, then the resulting size
will be `(2*4,3*5)` which is `(8,15)`. For the sake of brevity
here is a slightly smaller example:
```julia
julia> A = [(k+1)*l-1 for i in 1:2, j in 1:3, k in 1:2, l in 1:2]
2×3×2×2 Array{Int64,4}:
[:, :, 1, 1] =
1 1 1
1 1 1
[:, :, 2, 1] =
2 2 2
2 2 2
[:, :, 1, 2] =
3 3 3
3 3 3
[:, :, 2, 2] =
5 5 5
5 5 5
julia> MosaicView(A)
4×6 MosaicViews.MosaicView{Int64,4,Array{Int64,4}}:
1 1 1 3 3 3
1 1 1 3 3 3
2 2 2 5 5 5
2 2 2 5 5 5
```
### Customizing promotion
When the inputs are heterogeneous, `mosaic` attempts to convert the elements of all input arrays to a common type;
if this promotion step throws an error, consider extending `MosaicViews.promote_wrapped_type` for your types.
`ImageCore` provides such extensions for colors defined in [ColorTypes](https://github.com/JuliaGraphics/ColorTypes.jl).
You will likely want to load that package if you're using MosaicViews with `Colorant` arrays.
(`ImageCore` gets loaded by nearly all the packages in the JuliaImages suite, so you may find that it is already loaded.)
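As an illustration, here is a minimal sketch of such an extension for a hypothetical wrapper type `MyMeters`; the type and the two-argument method signature are assumptions modeled on the package's `promote_type`-style fallback, so consult the MosaicViews source for the exact contract:
```julia
using MosaicViews

# Hypothetical numeric wrapper; not part of MosaicViews or ImageCore.
struct MyMeters <: Real
    x::Float64
end
Base.Float64(v::MyMeters) = v.x   # conversion to the promoted type

# Promote MyMeters against other reals (and itself) to Float64, so that
# heterogeneous inputs to `mosaic` share a concrete element type.
MosaicViews.promote_wrapped_type(::Type{MyMeters}, ::Type{MyMeters}) = Float64
MosaicViews.promote_wrapped_type(::Type{MyMeters}, ::Type{T}) where {T<:Real} = Float64
MosaicViews.promote_wrapped_type(::Type{T}, ::Type{MyMeters}) where {T<:Real} = Float64
```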
[travis-img]: https://travis-ci.org/JuliaArrays/MosaicViews.jl.svg?branch=master
[travis-url]: https://travis-ci.org/JuliaArrays/MosaicViews.jl
[codecov-img]: http://codecov.io/github/JuliaArrays/MosaicViews.jl/coverage.svg?branch=master
[codecov-url]: http://codecov.io/github/JuliaArrays/MosaicViews.jl?branch=master
[pkgeval-img]: https://juliaci.github.io/NanosoldierReports/pkgeval_badges/M/MosaicViews.svg
[pkgeval-url]: https://juliaci.github.io/NanosoldierReports/pkgeval_badges/report.html
| MosaicViews | https://github.com/JuliaArrays/MosaicViews.jl.git |
|
["MIT"] | 0.1.6 | 98e186b38d34174a6d8fc45f6906d6c007c7a4ad | code | 939 |
using Documenter, Pesto, Makie, CairoMakie, ColorSchemes, DataFrames
CairoMakie.activate!(type="svg")
makedocs(
sitename="Pesto.jl",
authors = "Bjørn Tore Kopperud and Sebastian Höhna",
modules = [
Pesto,
isdefined(Base, :get_extension) ? Base.get_extension(Pesto, :PestoMakieExt) :
Pesto.PestoMakieExt
],
pages = [
"Home" => "index.md",
"Installation" => "install.md",
"Analyses" => [
"Simple analysis" => "analysis/simple.md",
"Extended analysis" => "analysis/extended.md",
"Number of shifts" => "analysis/shifts.md",
"Significant shifts" => "analysis/bayes_factors.md",
"Tip rates" => "analysis/tiprates.md",
"Plot with ggtree" => "plotting/ggtree.md"
],
"Functions" => "functions.md",
]
)
deploydocs(
repo = "github.com/kopperud/Pesto.jl.git",
)
| Pesto | https://github.com/kopperud/Pesto.jl.git |
|
[
"MIT"
] | 0.1.6 | 98e186b38d34174a6d8fc45f6906d6c007c7a4ad | code | 36765 | primates = SSEdata(NaN, Dict("Callicebus_oenanthe" => "?", "Varecia_variegata" => "?", "Cercocebus_torquatus" => "?", "Callicebus_modestus" => "?", "Procolobus_preussi" => "?", "Hylobates_concolor" => "?", "Saguinus_leucopus" => "?", "Pithecia_pithecia" => "?", "Saguinus_geoffroyi" => "?", "Aotus_hershkovitzi" => "?", "Hylobates_pileatus" => "?", "Presbytis_thomasi" => "?", "Callithrix_argentata" => "?", "Saimiri_ustus" => "?", "Semnopithecus_entellus" => "?", "Arctocebus_calabarensis" => "?", "Saimiri_vanzolinii" => "?", "Eulemur_fulvus" => "?", "Ateles_chamek" => "?", "Nasalis_larvatus" => "?", "Eulemur_coronatus" => "?", "Callithrix_geoffroyi" => "?", "Otolemur_crassicaudatus" => "?", "Cercopithecus_solatus" => "?", "Cercopithecus_nictitans" => "?", "Leontopithecus_rosalia" => "?", "Aotus_trivirgatus" => "?", "Presbytis_comata" => "?", "Macaca_nemestrina" => "?", "Cercopithecus_lhoesti" => "?", "Microcebus_murinus" => "?", "Macaca_thibetana" => "?", "Macaca_silenus" => "?", "Avahi_laniger" => "?", "Macaca_mulatta" => "?", "Arctocebus_aureus" => "?", "Saimiri_sciureus" => "?", "Pan_paniscus" => "?", "Cercopithecus_petaurista" => "?", "Lepilemur_septentrionalis" => "?", "Daubentonia_madagascariensis" => "?", "Callicebus_caligatus" => "?", "Otolemur_garnettii" => "?", "Trachypithecus_geei" => "?", "Cercopithecus_erythrotis" => "?", "Hapalemur_aureus" => "?", "Cercopithecus_campbelli" => "?", "Colobus_angolensis" => "?", "Saguinus_mystax" => "?", "Leontopithecus_caissara" => "?", "Pithecia_albicans" => "?", "Callicebus_dubius" => "?", "Callicebus_torquatus" => "?", "Hylobates_gabriellae" => "?", "Presbytis_frontata" => "?", "Aotus_nancymaae" => "?", "Macaca_sylvanus" => "?", "Cercopithecus_mitis" => "?", "Hylobates_klossii" => "?", "Macaca_maura" => "?", "Pithecia_aequatorialis" => "?", "Macaca_tonkeana" => "?", "Callicebus_hoffmannsi" => "?", "Papio_hamadryas" => "?", "Trachypithecus_obscurus" => "?", "Cercopithecus_sclateri" => "?", "Erythrocebus_patas" => "?", "Microcebus_coquereli" => "?", "Cacajao_calvus" => "?", "Cercopithecus_hamlyni" => "?", "Alouatta_coibensis" => "?", "Chlorocebus_aethiops" => "?", "Aotus_lemurinus" => "?", "Microcebus_rufus" => "?", "Callithrix_kuhlii" => "?", "Trachypithecus_auratus" => "?", "Cercopithecus_cephus" => "?", "Galago_gallarum" => "?", "Trachypithecus_phayrei" => "?", "Saguinus_bicolor" => "?", "Alouatta_seniculus" => "?", "Ateles_belzebuth" => "?", "Callicebus_olallae" => "?", "Colobus_satanas" => "?", "Cercocebus_agilis" => "?", "Cercopithecus_preussi" => "?", "Hylobates_hoolock" => "?", "Colobus_polykomos" => "?", "Presbytis_melalophos" => "?", "Trachypithecus_cristatus" => "?", "Eulemur_mongoz" => "?", "Lepilemur_dorsalis" => "?", "Lepilemur_microdon" => "?", "Pithecia_monachus" => "?", "Callithrix_humeralifera" => "?", "Pygathrix_brelichi" => "?", "Galago_matschiei" => "?", "Callicebus_brunneus" => "?", "Propithecus_tattersalli" => "?", "Allocebus_trichotis" => "?", "Chiropotes_satanas" => "?", "Cebus_olivaceus" => "?", "Alouatta_pigra" => "?", "Saguinus_fuscicollis" => "?", "Saguinus_midas" => "?", "Euoticus_elegantulus" => "?", "Lepilemur_ruficaudatus" => "?", "Cercopithecus_dryas" => "?", "Ateles_fusciceps" => "?", "Aotus_miconax" => "?", "Macaca_fascicularis" => "?", "Presbytis_potenziani" => "?", "Pygathrix_roxellana" => "?", "Eulemur_macaco" => "?", "Aotus_infulatus" => "?", "Propithecus_verreauxi" => "?", "Nycticebus_coucang" => "?", "Pongo_pygmaeus" => "?", 
"Perodicticus_potto" => "?", "Pithecia_irrorata" => "?", "Trachypithecus_pileatus" => "?", "Pan_troglodytes" => "?", "Hylobates_lar" => "?", "Hylobates_muelleri" => "?", "Ateles_geoffroyi" => "?", "Nycticebus_pygmaeus" => "?", "Galago_senegalensis" => "?", "Cercopithecus_neglectus" => "?", "Loris_tardigradus" => "?", "Allenopithecus_nigroviridis" => "?", "Saguinus_imperator" => "?", "Lagothrix_flavicauda" => "?", "Saimiri_oerstedii" => "?", "Saguinus_tripartitus" => "?", "Macaca_arctoides" => "?", "Hapalemur_griseus" => "?", "Gorilla_gorilla" => "?", "Cercopithecus_pogonias" => "?", "Trachypithecus_francoisi" => "?", "Procolobus_pennantii" => "?", "Galagoides_demidoff" => "?", "Lemur_catta" => "?", "Hylobates_agilis" => "?", "Homo_sapiens" => "?", "Callicebus_donacophilus" => "?", "Cercocebus_galeritus" => "?", "Procolobus_rufomitratus" => "?", "Macaca_cyclopis" => "?", "Cebus_capucinus" => "?", "Hylobates_syndactylus" => "?", "Pygathrix_avunculus" => "?", "Propithecus_diadema" => "?", "Cacajao_melanocephalus" => "?", "Galago_moholi" => "?", "Lophocebus_albigena" => "?", "Alouatta_caraya" => "?", "Macaca_ochreata" => "?", "Lepilemur_edwardsi" => "?", "Saguinus_inustus" => "?", "Galago_alleni" => "?", "Pygathrix_bieti" => "?", "Cebus_albifrons" => "?", "Ateles_marginatus" => "?", "Callicebus_moloch" => "?", "Callithrix_aurita" => "?", "Ateles_paniscus" => "?", "Aotus_azarai" => "?", "Callicebus_cinerascens" => "?", "Saguinus_nigricollis" => "?", "Presbytis_hosei" => "?", "Aotus_nigriceps" => "?", "Tarsius_pumilus" => "?", "Callithrix_jacchus" => "?", "Alouatta_belzebul" => "?", "Cebuella_pygmaea" => "?", "Cheirogaleus_major" => "?", "Phaner_furcifer" => "?", "Saimiri_boliviensis" => "?", "Macaca_nigra" => "?", "Theropithecus_gelada" => "?", "Galagoides_zanzibaricus" => "?", "Miopithecus_talapoin" => "?", "Lepilemur_leucopus" => "?", "Macaca_sinica" => "?", "Presbytis_femoralis" => "?", "Callimico_goeldii" => "?", "Procolobus_badius" => "?", "Callithrix_penicillata" => "?", "Chiropotes_albinasus" => "?", "Alouatta_fusca" => "?", "Procolobus_verus" => "?", "Aotus_vociferans" => "?", "Presbytis_rubicunda" => "?", "Mandrillus_leucophaeus" => "?", "Macaca_assamensis" => "?", "Macaca_radiata" => "?", "Pygathrix_nemaeus" => "?", "Tarsius_dianae" => "?", "Macaca_fuscata" => "?", "Leontopithecus_chrysopygus" => "?", "Lagothrix_lagotricha" => "?", "Aotus_brumbacki" => "?", "Alouatta_sara" => "?", "Hylobates_moloch" => "?", "Cercopithecus_mona" => "?", "Callithrix_flaviceps" => "?", "Lepilemur_mustelinus" => "?", "Callicebus_personatus" => "?", "Indri_indri" => "?", "Leontopithecus_chrysomelas" => "?", "Trachypithecus_johnii" => "?", "Cercopithecus_erythrogaster" => "?", "Saguinus_oedipus" => "?", "Colobus_guereza" => "?", "Eulemur_rubriventer" => "?", "Cheirogaleus_medius" => "?", "Tarsius_bancanus" => "?", "Tarsius_spectrum" => "?", "Saguinus_labiatus" => "?", "Nasalis_concolor" => "?", "Alouatta_palliata" => "?", "Trachypithecus_vetulus" => "?", "Mandrillus_sphinx" => "?", "Callicebus_cupreus" => "?", "Brachyteles_arachnoides" => "?", "Cebus_apella" => "?", "Hylobates_leucogenys" => "?", "Euoticus_pallidus" => "?", "Cercopithecus_wolfi" => "?", "Cercopithecus_ascanius" => "?", "Tarsius_syrichta" => "?", "Hapalemur_simus" => "?", "Cercopithecus_diana" => "?"), [234 235; 235 236; 236 237; 237 238; 238 1; 238 239; 239 2; 239 3; 237 240; 240 241; 241 4; 241 5; 240 242; 242 243; 243 6; 243 244; 244 245; 245 7; 245 8; 244 9; 242 246; 246 10; 246 11; 236 247; 247 248; 248 12; 248 249; 249 13; 249 14; 247 
250; 250 15; 250 251; 251 16; 251 17; 235 252; 252 18; 252 253; 253 254; 254 255; 255 256; 256 19; 256 257; 257 20; 257 258; 258 21; 258 259; 259 22; 259 260; 260 261; 261 23; 261 24; 260 25; 255 262; 262 263; 263 26; 263 264; 264 265; 265 27; 265 28; 264 29; 262 266; 266 30; 266 267; 267 268; 268 31; 268 32; 267 269; 269 33; 269 270; 270 34; 270 35; 254 271; 271 36; 271 272; 272 37; 272 273; 273 38; 273 274; 274 39; 274 40; 253 275; 275 41; 275 276; 276 277; 277 42; 277 278; 278 43; 278 279; 279 44; 279 45; 276 280; 280 46; 280 47; 234 281; 281 282; 282 283; 283 284; 284 48; 284 49; 283 50; 282 285; 285 51; 285 52; 281 286; 286 287; 287 288; 288 289; 289 290; 290 291; 291 292; 292 53; 292 54; 291 293; 293 55; 293 56; 290 294; 294 57; 294 295; 295 58; 295 296; 296 59; 296 297; 297 60; 297 61; 289 298; 298 62; 298 299; 299 300; 300 63; 300 301; 301 302; 302 64; 302 65; 301 66; 299 303; 303 67; 303 304; 304 305; 305 68; 305 306; 306 69; 306 70; 304 307; 307 308; 308 71; 308 72; 307 309; 309 73; 309 74; 288 310; 310 311; 311 312; 312 75; 312 76; 311 313; 313 77; 313 314; 314 78; 314 315; 315 79; 315 316; 316 317; 317 80; 317 81; 316 82; 310 318; 318 319; 319 320; 320 83; 320 321; 321 84; 321 322; 322 85; 322 86; 319 323; 323 87; 323 88; 318 324; 324 89; 324 325; 325 90; 325 91; 287 326; 326 327; 327 328; 328 329; 329 92; 329 93; 328 330; 330 94; 330 95; 327 331; 331 96; 331 332; 332 97; 332 333; 333 98; 333 334; 334 99; 334 100; 326 335; 335 336; 336 337; 337 101; 337 102; 336 338; 338 339; 339 103; 339 104; 338 340; 340 105; 340 341; 341 106; 341 342; 342 107; 342 343; 343 108; 343 344; 344 109; 344 110; 335 345; 345 346; 346 111; 346 347; 347 348; 348 112; 348 349; 349 113; 349 114; 347 350; 350 351; 351 115; 351 116; 350 352; 352 353; 353 117; 353 118; 352 354; 354 119; 354 120; 345 355; 355 356; 356 121; 356 357; 357 122; 357 358; 358 123; 358 124; 355 359; 359 360; 360 361; 361 125; 361 126; 360 362; 362 127; 362 128; 359 363; 363 129; 363 364; 364 365; 365 130; 365 366; 366 131; 366 132; 364 367; 367 133; 367 368; 368 134; 368 369; 369 135; 369 136; 286 370; 370 371; 371 372; 372 137; 372 373; 373 138; 373 374; 374 139; 374 375; 375 140; 375 141; 371 376; 376 377; 377 142; 377 378; 378 379; 379 143; 379 380; 380 144; 380 145; 378 381; 381 146; 381 382; 382 147; 382 148; 376 383; 383 149; 383 384; 384 150; 384 385; 385 151; 385 152; 370 386; 386 387; 387 388; 388 389; 389 390; 390 391; 391 392; 392 153; 392 393; 393 154; 393 394; 394 155; 394 395; 395 156; 395 396; 396 157; 396 397; 397 158; 397 159; 391 160; 390 398; 398 399; 399 400; 400 161; 400 401; 401 402; 402 162; 402 163; 401 403; 403 404; 404 164; 404 165; 403 405; 405 166; 405 167; 399 168; 398 406; 406 169; 406 170; 389 407; 407 171; 407 408; 408 172; 408 409; 409 173; 409 410; 410 174; 410 175; 388 411; 411 176; 411 177; 387 412; 412 413; 413 414; 414 178; 414 179; 413 415; 415 180; 415 416; 416 181; 416 182; 412 417; 417 183; 417 418; 418 184; 418 419; 419 185; 419 186; 386 420; 420 421; 421 422; 422 423; 423 424; 424 187; 424 188; 423 425; 425 189; 425 426; 426 190; 426 191; 422 427; 427 192; 427 428; 428 193; 428 194; 421 429; 429 195; 429 430; 430 431; 431 432; 432 433; 433 196; 433 197; 432 434; 434 198; 434 199; 431 435; 435 200; 435 201; 430 436; 436 437; 437 202; 437 438; 438 439; 439 203; 439 204; 438 440; 440 205; 440 206; 436 441; 441 207; 441 442; 442 208; 442 443; 443 209; 443 210; 420 444; 444 211; 444 445; 445 212; 445 446; 446 447; 447 213; 447 214; 446 448; 448 449; 449 215; 449 450; 450 216; 450 217; 448 
451; 451 218; 451 452; 452 453; 453 219; 453 454; 454 455; 455 220; 455 221; 454 456; 456 222; 456 223; 452 457; 457 458; 458 459; 459 460; 460 461; 461 224; 461 225; 460 462; 462 226; 462 227; 459 463; 463 228; 463 229; 458 464; 464 230; 464 231; 457 465; 465 232; 465 233], ["Galago_matschiei", "Euoticus_pallidus", "Euoticus_elegantulus", "Galagoides_zanzibaricus", "Galagoides_demidoff", "Galago_alleni", "Galago_senegalensis", "Galago_moholi", "Galago_gallarum", "Otolemur_garnettii", "Otolemur_crassicaudatus", "Perodicticus_potto", "Arctocebus_aureus", "Arctocebus_calabarensis", "Loris_tardigradus", "Nycticebus_pygmaeus", "Nycticebus_coucang", "Daubentonia_madagascariensis", "Lepilemur_mustelinus", "Lepilemur_septentrionalis", "Lepilemur_ruficaudatus", "Lepilemur_leucopus", "Lepilemur_edwardsi", "Lepilemur_microdon", "Lepilemur_dorsalis", "Lemur_catta", "Hapalemur_aureus", "Hapalemur_simus", "Hapalemur_griseus", "Varecia_variegata", "Eulemur_rubriventer", "Eulemur_mongoz", "Eulemur_coronatus", "Eulemur_macaco", "Eulemur_fulvus", "Avahi_laniger", "Indri_indri", "Propithecus_diadema", "Propithecus_verreauxi", "Propithecus_tattersalli", "Phaner_furcifer", "Allocebus_trichotis", "Microcebus_coquereli", "Microcebus_rufus", "Microcebus_murinus", "Cheirogaleus_medius", "Cheirogaleus_major", "Tarsius_dianae", "Tarsius_pumilus", "Tarsius_spectrum", "Tarsius_syrichta", "Tarsius_bancanus", "Chiropotes_satanas", "Chiropotes_albinasus", "Cacajao_melanocephalus", "Cacajao_calvus", "Pithecia_pithecia", "Pithecia_irrorata", "Pithecia_aequatorialis", "Pithecia_monachus", "Pithecia_albicans", "Callicebus_torquatus", "Callicebus_modestus", "Callicebus_oenanthe", "Callicebus_olallae", "Callicebus_donacophilus", "Callicebus_personatus", "Callicebus_dubius", "Callicebus_cupreus", "Callicebus_caligatus", "Callicebus_hoffmannsi", "Callicebus_brunneus", "Callicebus_moloch", "Callicebus_cinerascens", "Alouatta_palliata", "Alouatta_coibensis", "Alouatta_caraya", "Alouatta_pigra", "Alouatta_fusca", "Alouatta_seniculus", "Alouatta_sara", "Alouatta_belzebul", "Ateles_paniscus", "Ateles_belzebuth", "Ateles_geoffroyi", "Ateles_fusciceps", "Ateles_marginatus", "Ateles_chamek", "Brachyteles_arachnoides", "Lagothrix_lagotricha", "Lagothrix_flavicauda", "Cebus_olivaceus", "Cebus_apella", "Cebus_capucinus", "Cebus_albifrons", "Saimiri_boliviensis", "Saimiri_vanzolinii", "Saimiri_sciureus", "Saimiri_ustus", "Saimiri_oerstedii", "Aotus_lemurinus", "Aotus_hershkovitzi", "Aotus_vociferans", "Aotus_brumbacki", "Aotus_nancymaae", "Aotus_miconax", "Aotus_infulatus", "Aotus_nigriceps", "Aotus_trivirgatus", "Aotus_azarai", "Callimico_goeldii", "Cebuella_pygmaea", "Callithrix_humeralifera", "Callithrix_argentata", "Callithrix_flaviceps", "Callithrix_aurita", "Callithrix_kuhlii", "Callithrix_geoffroyi", "Callithrix_penicillata", "Callithrix_jacchus", "Leontopithecus_chrysomelas", "Leontopithecus_caissara", "Leontopithecus_rosalia", "Leontopithecus_chrysopygus", "Saguinus_oedipus", "Saguinus_geoffroyi", "Saguinus_midas", "Saguinus_bicolor", "Saguinus_leucopus", "Saguinus_imperator", "Saguinus_mystax", "Saguinus_labiatus", "Saguinus_inustus", "Saguinus_nigricollis", "Saguinus_tripartitus", "Saguinus_fuscicollis", "Pongo_pygmaeus", "Gorilla_gorilla", "Homo_sapiens", "Pan_troglodytes", "Pan_paniscus", "Hylobates_hoolock", "Hylobates_moloch", "Hylobates_lar", "Hylobates_agilis", "Hylobates_pileatus", "Hylobates_muelleri", "Hylobates_klossii", "Hylobates_syndactylus", "Hylobates_gabriellae", "Hylobates_leucogenys", "Hylobates_concolor", 
"Trachypithecus_geei", "Trachypithecus_auratus", "Trachypithecus_francoisi", "Trachypithecus_cristatus", "Trachypithecus_pileatus", "Trachypithecus_phayrei", "Trachypithecus_obscurus", "Semnopithecus_entellus", "Presbytis_potenziani", "Presbytis_comata", "Presbytis_frontata", "Presbytis_femoralis", "Presbytis_melalophos", "Presbytis_rubicunda", "Presbytis_thomasi", "Presbytis_hosei", "Trachypithecus_johnii", "Trachypithecus_vetulus", "Pygathrix_nemaeus", "Pygathrix_avunculus", "Pygathrix_roxellana", "Pygathrix_brelichi", "Pygathrix_bieti", "Nasalis_larvatus", "Nasalis_concolor", "Procolobus_preussi", "Procolobus_verus", "Procolobus_rufomitratus", "Procolobus_pennantii", "Procolobus_badius", "Colobus_satanas", "Colobus_angolensis", "Colobus_polykomos", "Colobus_guereza", "Mandrillus_sphinx", "Mandrillus_leucophaeus", "Cercocebus_torquatus", "Cercocebus_agilis", "Cercocebus_galeritus", "Lophocebus_albigena", "Theropithecus_gelada", "Papio_hamadryas", "Macaca_sylvanus", "Macaca_tonkeana", "Macaca_maura", "Macaca_ochreata", "Macaca_nigra", "Macaca_silenus", "Macaca_nemestrina", "Macaca_arctoides", "Macaca_radiata", "Macaca_assamensis", "Macaca_thibetana", "Macaca_sinica", "Macaca_fascicularis", "Macaca_fuscata", "Macaca_mulatta", "Macaca_cyclopis", "Allenopithecus_nigroviridis", "Miopithecus_talapoin", "Erythrocebus_patas", "Chlorocebus_aethiops", "Cercopithecus_solatus", "Cercopithecus_preussi", "Cercopithecus_lhoesti", "Cercopithecus_hamlyni", "Cercopithecus_neglectus", "Cercopithecus_mona", "Cercopithecus_campbelli", "Cercopithecus_wolfi", "Cercopithecus_pogonias", "Cercopithecus_erythrotis", "Cercopithecus_sclateri", "Cercopithecus_cephus", "Cercopithecus_ascanius", "Cercopithecus_petaurista", "Cercopithecus_erythrogaster", "Cercopithecus_nictitans", "Cercopithecus_mitis", "Cercopithecus_dryas", "Cercopithecus_diana"], [2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 0.0, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 9.999999974752427e-7, 9.999999974752427e-7, 1.9999999949504854e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 2.999999992425728e-6, 2.999999992425728e-6, 3.000000006636583e-6, 3.000000006636583e-6, 1.9999999949504854e-6, 3.000000006636583e-6, 3.000000006636583e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 3.000000006636583e-6, 3.000000006636583e-6, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 3.000000006636583e-6, 3.000000006636583e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 3.000000006636583e-6, 3.000000006636583e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 
1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 9.999999974752427e-7, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 9.999999974752427e-7, 9.999999974752427e-7, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 1.0000000116860974e-6, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 9.999999974752427e-7, 9.999999974752427e-7, 1.0000000116860974e-6, 1.0000000116860974e-6, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 1.9999999949504854e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 9.999999974752427e-7, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 9.999999974752427e-7, 1.0000000116860974e-6, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 9.999999974752427e-7, 2.00000000916134e-6, 2.00000000916134e-6, 9.999999974752427e-7, 9.999999974752427e-7, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 1.0000000116860974e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 9.999999974752427e-7, 9.999999974752427e-7, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 3.000000006636583e-6, 3.000000006636583e-6, 3.000000006636583e-6, 3.000000006636583e-6, 3.000000006636583e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 3.000000006636583e-6, 3.000000006636583e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 1.9999999949504854e-6, 1.9999999949504854e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 3.000000006636583e-6, 3.000000006636583e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 2.00000000916134e-6, 65.091688, 51.870523000000006, 20.92240600000001, 8.52069800000001, 3.64498600000001, 1.4498640000000123, 7.327327000000011, 
3.57410500000001, 6.316186000000009, 5.403509000000007, 2.3643440000000098, 1.4611580000000117, 4.24264500000001, 16.341330000000006, 4.703630000000004, 1.8817730000000026, 9.200567000000007, 2.6724780000000052, 39.482387, 27.516181000000003, 22.03481, 17.172241999999997, 11.199942999999998, 7.600898000000001, 5.028629000000002, 3.0029320000000013, 1.3631650000000022, 0.4532300000000049, 13.059531, 6.22907, 5.662447, 5.342910000000003, 11.892740000000003, 6.376585000000006, 3.074037000000004, 3.756912000000007, 3.6840170000000043, 13.616796999999998, 7.435237999999998, 5.879827999999996, 1.5414209999999997, 17.035442000000003, 11.470708000000002, 7.053373000000001, 4.099788000000004, 2.7374220000000022, 9.400472, 62.60012400000001, 35.182478, 15.30095, 0.2574249999999978, 8.683556000000003, 51.22504000000001, 30.152181000000006, 25.316291000000007, 20.972604000000004, 11.880747000000007, 8.122419000000008, 3.688448000000008, 3.7005120000000105, 10.233818000000007, 6.5495850000000075, 3.7898950000000085, 1.7556310000000082, 16.471412, 13.302978000000003, 7.328816000000003, 3.0858110000000067, 0.6554110000000009, 10.303458000000006, 8.00036500000001, 4.724423000000009, 1.9827720000000113, 5.327293000000012, 2.3505500000000126, 2.323719000000011, 20.683504000000006, 12.729946000000005, 0.5022640000000109, 6.9263560000000055, 4.5273070000000075, 2.9470970000000065, 1.7586880000000065, 0.7931460000000072, 11.286511000000004, 4.692949000000006, 3.038406000000009, 2.0373540000000077, 1.215005000000005, 2.885072000000008, 9.030846000000004, 2.921912000000006, 25.692878000000007, 23.186569000000006, 7.2623920000000055, 3.9637810000000044, 0.1411860000000047, 8.420868000000006, 6.8241370000000074, 3.9430110000000056, 1.7653140000000036, 22.028906000000006, 15.787338000000005, 3.416193000000007, 11.746098000000003, 4.522717, 8.668909000000006, 6.289868000000006, 4.164303000000004, 2.6315250000000034, 1.237796000000003, 18.987616000000003, 18.376969000000003, 11.36504, 3.7060110000000037, 2.0792040000000043, 11.200240999999998, 3.433357000000001, 0.5105470000000025, 0.26554400000000555, 0.25634900000000016, 15.646321, 7.678649, 6.625269000000003, 3.837339, 12.682662999999998, 7.878203999999997, 3.279131999999997, 3.3092069999999936, 9.818773, 7.678029000000002, 4.386891000000006, 1.953794000000002, 5.079925000000003, 3.0388599999999997, 1.3831989999999976, 32.417033, 19.636878000000003, 15.844635000000004, 6.414494000000005, 5.067397000000007, 1.9536930000000083, 5.875858000000001, 4.011581, 3.5390080000000026, 3.516834000000003, 3.1008390000000006, 2.029258000000006, 0.8819289999999995, 4.0189710000000005, 1.8065300000000022, 0.7882440000000059, 18.336766000000004, 13.606692000000002, 8.892183000000003, 8.886603000000001, 6.929667000000002, 6.808640000000004, 5.122404000000003, 3.734664000000002, 2.896938000000006, 0.7948489999999993, 0.7521189999999933, 0.6943409999999943, 6.5521050000000045, 3.8073950000000067, 1.8942810000000065, 1.588858000000009, 0.22076400000000262, 1.5507350000000102, 0.527183000000008, 0.5324450000000098, 5.1993870000000015, 8.001421999999998, 3.8710939999999994, 2.609690999999998, 1.0378479999999968, 3.483866000000006, 11.958092, 5.260843999999999, 3.495014999999995, 4.3132559999999955, 1.7550819999999945, 6.037348999999999, 3.4905729999999977, 0.8052469999999943, 11.254554000000006, 9.240851000000006, 6.710008000000009, 5.442186000000007, 1.9658940000000058, 2.3130990000000082, 0.5032410000000027, 2.2280430000000067, 1.5427510000000098, 7.491109000000009, 6.256468000000005, 
3.5807140000000075, 2.6482660000000067, 0.13527900000001125, 1.6532970000000091, 2.2447700000000097, 4.432664000000003, 2.8010050000000035, 1.7081400000000002, 0.7161270000000002, 1.0553840000000037, 2.229598000000003, 0.9127500000000026, 0.26901300000000106, 8.282987000000006, 6.749637000000007, 5.725099000000007, 5.180709000000007, 4.912274000000011, 0.8414430000000124, 0.008620000000007622, 4.215323000000012, 3.6333350000000095, 2.5588360000000065, 1.2402630000000059, 0.7864810000000091, 0.778255999999999, 2.953573000000013, 2.322970000000012, 1.7085810000000095, 1.0323000000000064, 0.5027220000000057, 0.4886510000000044, 0.7530200000000065, 0.9547770000000071, 1.2344600000000128], 0.635, [13.221165, 30.948117, 12.401708, 4.875712, 3.644984, 2.195122, 1.449862, 1.449862, 1.193371, 3.753222, 3.574104, 3.574104, 1.011141, 0.912677, 5.403509, 3.039165, 0.903186, 1.461157, 1.461157, 2.364343, 2.073541, 4.242644, 4.242644, 4.581076, 11.6377, 4.703629, 2.821857, 1.881772, 1.881772, 7.140763, 9.200566, 6.528089, 2.672477, 2.672477, 12.388136, 39.482385, 11.966206, 5.481371, 4.862568, 5.972299, 11.199941, 3.599045, 7.600896, 2.572269, 5.028627, 2.025697, 3.00293, 1.639767, 0.909935, 0.453228, 0.453228, 1.363163, 4.112711, 6.830461, 6.229068, 0.566623, 0.319537, 5.342909, 5.342909, 5.662445, 1.166791, 11.892738, 5.516155, 3.302548, 3.074035, 3.074035, 2.619673, 3.75691, 0.072895, 3.684015, 3.684015, 8.418013, 13.616795, 6.181559, 7.435235, 1.55541, 5.879825, 4.338407, 1.541418, 1.541418, 10.480739, 17.03544, 5.564734, 4.417335, 7.05337, 2.953585, 4.099785, 1.362366, 2.73742, 2.73742, 2.070236, 9.400469, 9.400469, 2.491564, 27.417646, 19.881528, 15.043525, 0.257424, 0.257424, 15.300949, 26.498922, 8.683554, 8.683554, 11.375084, 21.072859, 4.83589, 4.343687, 9.091857, 3.758328, 4.433971, 3.688447, 3.688447, 4.421907, 3.700511, 3.700511, 1.646929, 10.233817, 3.684233, 6.549583, 2.75969, 3.789893, 2.034264, 1.75563, 1.75563, 4.501192, 16.47141, 3.168434, 5.974162, 7.328814, 4.243005, 2.4304, 0.655408, 0.655408, 3.085809, 2.99952, 10.303456, 2.303093, 3.275942, 4.724421, 2.741651, 1.98277, 1.98277, 2.673072, 2.976743, 2.350547, 2.350547, 3.003574, 2.323717, 2.323717, 4.632787, 7.953558, 12.227682, 0.502263, 0.502263, 5.80359, 6.926355, 2.399049, 4.527306, 1.58021, 2.947096, 1.188409, 0.965542, 0.793145, 0.793145, 1.758687, 9.396993, 6.593562, 1.654543, 3.038405, 1.001052, 2.037353, 0.822349, 1.215004, 1.215004, 1.807877, 2.885071, 2.885071, 2.255665, 9.030845, 6.108934, 2.92191, 2.92191, 4.459303, 2.506309, 15.924177, 3.298611, 3.963779, 3.963779, 7.121206, 0.141184, 0.141184, 14.765701, 8.420866, 1.596731, 6.824135, 2.881126, 3.943009, 2.177697, 1.765313, 1.765313, 3.663972, 6.241568, 12.371145, 3.416191, 3.416191, 4.04124, 7.223381, 4.522715, 4.522715, 3.077189, 8.668907, 2.379041, 6.289867, 2.125565, 4.164302, 1.532778, 2.631524, 1.393729, 1.237795, 1.237795, 3.04129, 0.610647, 18.376968, 7.011929, 7.659029, 3.706009, 1.626807, 2.079202, 2.079202, 0.164799, 7.766884, 3.433356, 3.433356, 10.689694, 0.245003, 0.265543, 0.265543, 0.254198, 0.256348, 0.256348, 3.341295, 7.967672, 7.678648, 1.05338, 6.625268, 2.78793, 3.837338, 3.837338, 2.963658, 4.804459, 4.599072, 3.279131, 3.279131, 4.568997, 3.309206, 3.309206, 2.86389, 9.818771, 2.140744, 3.291138, 4.386889, 2.433097, 1.953792, 1.953792, 2.598104, 5.079923, 2.041065, 3.038858, 1.655661, 1.383197, 1.383197, 18.808007, 12.780155, 3.792243, 15.844634, 9.430141, 6.414492, 1.347097, 5.067395, 3.113704, 1.953691, 1.953691, 13.76102, 1.864277, 4.011579, 
0.472573, 0.022174, 3.516832, 0.415995, 3.100837, 3.100837, 1.50975, 2.029256, 1.147329, 0.881927, 0.881927, 1.856887, 4.018969, 2.212441, 1.806528, 1.018286, 0.788242, 0.788242, 14.080267, 4.730074, 4.714509, 0.00558, 1.956936, 0.121027, 1.686236, 5.122402, 1.38774, 3.734663, 0.837726, 2.896937, 2.102089, 0.794848, 0.04273, 0.752118, 0.057778, 0.69434, 0.69434, 6.808638, 0.377562, 2.74471, 1.913114, 1.894279, 0.305423, 1.368094, 0.220763, 0.220763, 0.038123, 1.023552, 0.527182, 0.527182, 1.01829, 0.532444, 0.532444, 3.807393, 1.352718, 5.199385, 5.199385, 0.885181, 8.00142, 4.130328, 3.871092, 1.261403, 2.609689, 1.571843, 1.037846, 1.037846, 5.408317, 3.483864, 3.483864, 1.6486, 6.697248, 1.765829, 3.495013, 3.495013, 0.947588, 4.313254, 2.558174, 1.755081, 1.755081, 5.920743, 6.037347, 2.546776, 3.490571, 2.685326, 0.805245, 0.805245, 7.082212, 2.013703, 2.530843, 1.267822, 3.476292, 1.965891, 1.965891, 3.129087, 2.313096, 1.809858, 0.503238, 0.503238, 4.481965, 2.228041, 0.685292, 1.542749, 1.542749, 1.749742, 7.491107, 1.234641, 2.675754, 0.932448, 2.512987, 0.135277, 0.135277, 0.994969, 1.653295, 1.653295, 1.335944, 2.244768, 2.244768, 1.823804, 1.631659, 2.801003, 1.092865, 0.992013, 0.716125, 0.716125, 0.652756, 1.055382, 1.055382, 2.203066, 2.229596, 1.316848, 0.912748, 0.643737, 0.269011, 0.269011, 2.971567, 8.282985, 1.53335, 6.749635, 1.024538, 0.54439, 5.180707, 5.180707, 0.812825, 4.070831, 0.841441, 0.832823, 0.008617, 0.008617, 0.696951, 4.215321, 0.581988, 1.074499, 2.558834, 1.318573, 0.453782, 0.786479, 0.786479, 0.462007, 0.778254, 0.778254, 0.679762, 0.630603, 0.614389, 0.676281, 0.529578, 0.50272, 0.50272, 0.543649, 0.488649, 0.488649, 0.955561, 0.753017, 0.753017, 1.368193, 0.954775, 0.954775, 1.719113, 1.234458, 1.234458], [65.091686, 51.870521, 20.922404, 8.520696000000001, 3.644984000000001, 1.449862000000003, 7.327325000000002, 3.574103000000001, 6.316184, 5.403506999999998, 2.3643420000000006, 1.4611560000000026, 4.242643000000001, 16.341327999999997, 4.703627999999995, 1.8817709999999934, 9.200564999999997, 2.672475999999996, 39.482384999999994, 27.516178999999994, 22.03480799999999, 17.172239999999988, 11.199940999999988, 7.600895999999992, 5.028626999999993, 3.002929999999992, 1.363162999999993, 0.45322799999999575, 13.05952899999999, 6.229067999999991, 5.662444999999991, 5.342907999999994, 11.892737999999994, 6.376582999999997, 3.074034999999995, 3.7569099999999978, 3.684014999999995, 13.616794999999989, 7.435235999999989, 5.879825999999987, 1.5414189999999905, 17.035439999999994, 11.470705999999993, 7.053370999999991, 4.099785999999995, 2.737419999999993, 9.400469999999991, 62.600122, 35.182475999999994, 15.300947999999991, 0.25742299999998863, 8.683553999999994, 51.225038, 30.152178999999997, 25.316288999999998, 20.972601999999995, 11.880744999999997, 8.122416999999999, 3.688445999999999, 3.7005100000000013, 10.233815999999997, 6.549582999999998, 3.7898929999999993, 1.755628999999999, 16.47140999999999, 13.302975999999994, 7.328813999999994, 3.0858089999999976, 0.6554089999999917, 10.303455999999997, 8.000363, 4.7244209999999995, 1.9827700000000021, 5.327291000000002, 2.3505480000000034, 2.323717000000002, 20.683501999999997, 12.729943999999996, 0.5022620000000018, 6.926353999999996, 4.527304999999998, 2.9470949999999974, 1.7586859999999973, 0.7931439999999981, 11.286508999999995, 4.692946999999997, 3.038404, 2.0373519999999985, 1.2150029999999958, 2.885069999999999, 9.030843999999995, 2.921909999999997, 25.692876, 23.186566999999997, 7.262389999999996, 
3.9637789999999953, 0.14118399999999554, 8.420865999999997, 6.824134999999998, 3.9430089999999964, 1.7653119999999944, 22.028903999999997, 15.787335999999996, 3.4161909999999978, 11.746095999999994, 4.522714999999991, 8.668906999999997, 6.289865999999996, 4.164300999999995, 2.6315229999999943, 1.2377939999999938, 18.987613999999994, 18.376966999999993, 11.365037999999991, 3.7060089999999946, 2.079201999999995, 11.20023899999999, 3.4333549999999917, 0.5105449999999934, 0.2655419999999964, 0.256346999999991, 15.646318999999991, 7.678646999999991, 6.625266999999994, 3.837336999999991, 12.682660999999989, 7.8782019999999875, 3.279129999999988, 3.3092049999999844, 9.818770999999991, 7.678026999999993, 4.3868889999999965, 1.9537919999999929, 5.079922999999994, 3.0388579999999905, 1.3831969999999885, 32.417030999999994, 19.636875999999994, 15.844632999999995, 6.414491999999996, 5.067394999999998, 1.9536909999999992, 5.875855999999992, 4.0115789999999905, 3.5390059999999934, 3.5168319999999937, 3.1008369999999914, 2.0292559999999966, 0.8819269999999904, 4.018968999999991, 1.806527999999993, 0.7882419999999968, 18.336763999999995, 13.606689999999993, 8.892180999999994, 8.886600999999992, 6.929664999999993, 6.808637999999995, 5.122401999999994, 3.734661999999993, 2.8969359999999966, 0.7948469999999901, 0.7521169999999842, 0.6943389999999852, 6.552102999999995, 3.8073929999999976, 1.8942789999999974, 1.5888559999999998, 0.22076199999999346, 1.550733000000001, 0.5271809999999988, 0.5324430000000007, 5.199384999999992, 8.001419999999989, 3.87109199999999, 2.609688999999989, 1.0378459999999876, 3.483863999999997, 11.958089999999991, 5.26084199999999, 3.495012999999986, 4.313253999999986, 1.7550799999999853, 6.03734699999999, 3.4905709999999885, 0.8052449999999851, 11.254551999999997, 9.240848999999997, 6.710006, 5.4421839999999975, 1.9658919999999966, 2.313096999999999, 0.5032389999999936, 2.2280409999999975, 1.5427490000000006, 7.4911069999999995, 6.256465999999996, 3.5807119999999983, 2.6482639999999975, 0.1352770000000021, 1.653295, 2.2447680000000005, 4.432661999999993, 2.8010029999999944, 1.708137999999991, 0.716124999999991, 1.0553819999999945, 2.2295959999999937, 0.9127479999999935, 0.2690109999999919, 8.282984999999996, 6.749634999999998, 5.725096999999998, 5.180706999999998, 4.9122720000000015, 0.8414410000000032, 0.00861799999999846, 4.215321000000003, 3.6333330000000004, 2.5588339999999974, 1.2402609999999967, 0.7864789999999999, 0.7782539999999898, 2.9535710000000037, 2.322968000000003, 1.7085790000000003, 1.0322979999999973, 0.5027199999999965, 0.4886489999999952, 0.7530179999999973, 0.9547749999999979, 1.2344580000000036], [463, 464, 460, 461, 457, 458, 454, 455, 451, 452, 450, 453, 449, 456, 448, 459, 447, 462, 444, 445, 441, 442, 440, 443, 438, 439, 437, 446, 435, 436, 432, 433, 430, 431, 429, 434, 426, 427, 425, 428, 423, 424, 421, 422, 418, 419, 416, 417, 414, 415, 411, 412, 408, 409, 407, 410, 405, 406, 404, 413, 401, 402, 398, 399, 395, 396, 394, 397, 393, 400, 392, 403, 390, 391, 387, 388, 385, 386, 382, 383, 380, 381, 377, 378, 376, 379, 375, 384, 374, 389, 373, 420, 370, 371, 368, 369, 366, 367, 363, 364, 361, 362, 358, 359, 357, 360, 356, 365, 353, 354, 350, 351, 348, 349, 346, 347, 344, 345, 341, 342, 337, 338, 334, 335, 333, 336, 330, 331, 329, 332, 327, 328, 326, 339, 325, 340, 321, 322, 319, 320, 317, 318, 315, 316, 313, 314, 311, 312, 310, 323, 309, 324, 308, 343, 307, 352, 306, 355, 305, 372, 302, 303, 300, 301, 298, 299, 295, 296, 293, 294, 290, 291, 288, 289, 287, 292, 
285, 286, 284, 297, 281, 282, 279, 280, 277, 278, 275, 276, 274, 283, 273, 304, 270, 271, 268, 269, 266, 267, 263, 264, 261, 262, 260, 265, 258, 259, 255, 256, 252, 253, 251, 254, 250, 257, 247, 248, 245, 246, 243, 244, 242, 249, 239, 240, 236, 237, 235, 238, 232, 233, 231, 234, 228, 229, 226, 227, 225, 230, 223, 224, 222, 241, 219, 220, 217, 218, 215, 216, 213, 214, 211, 212, 208, 209, 207, 210, 204, 205, 203, 206, 202, 221, 199, 200, 197, 198, 195, 196, 193, 194, 190, 191, 187, 188, 186, 189, 185, 192, 184, 201, 181, 182, 179, 180, 176, 177, 173, 174, 171, 172, 169, 170, 168, 175, 167, 178, 163, 164, 162, 165, 160, 161, 158, 159, 156, 157, 153, 154, 152, 155, 151, 166, 148, 149, 145, 146, 144, 147, 141, 142, 139, 140, 138, 143, 136, 137, 132, 133, 131, 134, 129, 130, 128, 135, 126, 127, 123, 124, 121, 122, 119, 120, 117, 118, 114, 115, 111, 112, 110, 113, 109, 116, 108, 125, 107, 150, 106, 183, 105, 272, 102, 103, 98, 99, 97, 100, 96, 101, 95, 104, 92, 93, 89, 90, 87, 88, 85, 86, 84, 91, 82, 83, 79, 80, 77, 78, 75, 76, 73, 74, 70, 71, 68, 69, 65, 66, 64, 67, 62, 63, 58, 59, 57, 60, 55, 56, 54, 61, 50, 51, 49, 52, 47, 48, 45, 46, 43, 44, 41, 42, 40, 53, 39, 72, 38, 81, 36, 37, 33, 34, 31, 32, 28, 29, 26, 27, 25, 30, 22, 23, 18, 19, 17, 20, 15, 16, 14, 21, 11, 12, 10, 13, 7, 8, 5, 6, 4, 9, 3, 24, 2, 35, 1, 94], 232)
| Pesto | https://github.com/kopperud/Pesto.jl.git |
|
["MIT"] | 0.1.6 | 98e186b38d34174a6d8fc45f6906d6c007c7a4ad | code | 5515 |
module PestoMakieExt
import Makie, Pesto, DataFrames
import ColorSchemes
## MAKIE recipe
function Pesto.treeplot!(
ax::Makie.Axis,
data::Pesto.SSEdata;
tip_label = false,
tip_label_size = 5
)
n_edges = size(data.edges)[1]
n_tips = length(data.tiplab)
r = zeros(n_edges)
h = Pesto.coordinates(data, r)
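## each row of h (as used below) holds the coordinates of one edge:
## [x_start, x_end, y, rate]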
horizontal_lines = [
Makie.Point(h[i,1], h[i,3]) => Makie.Point(h[i,2], h[i,3]) for i in 1:n_edges
]
h = sortslices(h, dims = 1, by = x -> (x[1],x[2]))
upward_lines = similar(horizontal_lines, 0)
downward_lines = similar(horizontal_lines, 0)
r_up = Float64[]
r_down = Float64[]
for i in 1:n_edges
if i % 2 > 0 ## only for internal edges
top = h[i+1,3]
bottom = h[i,3]
#bottom, top = extrema(h[i:i+1, 3])
mid = (bottom+top) / 2
## branch going up
start = Makie.Point(h[i,1], mid)
finish = Makie.Point(h[i,1], top)
append!(upward_lines, [start => finish])
## branch going down
finish = Makie.Point(h[i,1], bottom)
append!(downward_lines, [start => finish])
append!(r_up, h[i+1,4])
append!(r_down, h[i,4])
end
end
## actually do the plotting
Makie.linesegments!(ax, downward_lines, color = :black, linewidth = 1)
Makie.linesegments!(ax, upward_lines, color = :black, linewidth = 1)
Makie.linesegments!(ax, horizontal_lines, color = :black, linewidth = 1)
Makie.hidespines!(ax)
Makie.hidedecorations!(ax)
if tip_label
for i in 1:n_tips
tp = data.tiplab[i]
Makie.text!(ax, tp, fontsize = tip_label_size,
position = (maximum(h[:,2])+0.5, i),
align = (:left, :center))
end
tip_label_x_offset = 5.0
else
tip_label_x_offset = 0.0
end
top_offset = 0.05*n_tips
Makie.ylims!(ax, (0.0, n_tips + top_offset))
end
function Pesto.treeplot!(
ax::Makie.Axis,
data::Pesto.SSEdata,
rates::DataFrames.DataFrame,
rate_name = "mean_netdiv",
cmap = Makie.cgrad(:Spectral, 5, categorical = true);
tip_label = false,
tip_label_size = 5
)
n_edges = size(data.edges)[1]
n_tips = length(data.tiplab)
@assert rate_name in names(rates)
df2 = DataFrames.sort(rates, :edge)
r = df2[!,Symbol(rate_name)][2:end]
h = Pesto.coordinates(data, r)
horizontal_lines = [
Makie.Point(h[i,1], h[i,3]) => Makie.Point(h[i,2], h[i,3]) for i in 1:n_edges
]
h = sortslices(h, dims = 1, by = x -> (x[1],x[2]))
upward_lines = similar(horizontal_lines, 0)
downward_lines = similar(horizontal_lines, 0)
r_up = Float64[]
r_down = Float64[]
for i in 1:n_edges
if i % 2 > 0 ## only for internal edges
top = h[i+1,3]
bottom = h[i,3]
#bottom, top = extrema(h[i:i+1, 3])
mid = (bottom+top) / 2
## branch going up
start = Makie.Point(h[i,1], mid)
finish = Makie.Point(h[i,1], top)
append!(upward_lines, [start => finish])
## branch going down
finish = Makie.Point(h[i,1], bottom)
append!(downward_lines, [start => finish])
append!(r_up, h[i+1,4])
append!(r_down, h[i,4])
end
end
## actually do the plotting
r_extrema = extrema(r)
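## shared color range so all three segment groups map onto the same scale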
## do the color bar
#if !isnothing(rate_name)
#cmap = Makie.cgrad(:Spectral, 5, categorical = true)
Makie.linesegments!(ax, downward_lines, color = r_down, colormap = cmap, colorrange = r_extrema, linewidth = 1)
Makie.linesegments!(ax, upward_lines, color = r_up, colormap = cmap, colorrange = r_extrema, linewidth = 1)
Makie.linesegments!(ax, horizontal_lines, color = r, colormap = cmap, linewidth = 1)
Makie.hidespines!(ax)
Makie.hidedecorations!(ax)
if tip_label
for i in 1:n_tips
tp = data.tiplab[i]
Makie.text!(ax, tp, fontsize = tip_label_size,
position = (maximum(h[:,2])+0.5, i),
align = (:left, :center))
end
tip_label_x_offset = 5.0
else
tip_label_x_offset = 0.0
end
## 5% buffer
top_offset = 0.05*n_tips
Makie.ylims!(ax, (0.0, n_tips + top_offset))
end
function Pesto.treeplot(
data::Pesto.SSEdata,
rates::DataFrames.DataFrame,
rate_name::String = "mean_netdiv";
cmap = Makie.cgrad(:Spectral, 5, categorical = true),
tip_label = false,
tip_label_size = 5
)
if !isnothing(rate_name)
@assert rate_name in names(rates) "rate name must be equal to one of the column names in the rates data frame"
end
#cmap = Makie.cgrad(:Spectral, 5, categorical = true)
fig = Makie.Figure()
ax = Makie.Axis(fig[1,1])
Pesto.treeplot!(ax, data, rates, rate_name, cmap; tip_label = tip_label, tip_label_size = tip_label_size)
if !isnothing(rate_name)
Makie.Colorbar(fig[0,1], limits = extrema(rates[1:end-1, Symbol(rate_name)]), label = rate_name, colormap = cmap, vertical = false)
Makie.rowgap!(fig.layout, 2.0)
end
return(fig)
end
function Pesto.treeplot(
data::Pesto.SSEdata;
tip_label = false,
tip_label_size = 5
)
fig = Makie.Figure()
ax = Makie.Axis(fig[1,1])
Pesto.treeplot!(ax, data; tip_label = tip_label, tip_label_size = tip_label_size)
return(fig)
end
end
| Pesto | https://github.com/kopperud/Pesto.jl.git |
|
["MIT"] | 0.1.6 | 98e186b38d34174a6d8fc45f6906d6c007c7a4ad | code | 1309 |
module Pesto
import OrdinaryDiffEq
import RCall
import CSV
import Distributions
import Optim
import LinearAlgebra
import DataFrames
import FastGaussQuadrature
import LoopVectorization
import ForwardDiff
## the rest
include("datatypes.jl")
include("utils.jl")
include("polytomy.jl")
include("shiftbins.jl")
include("display.jl")
## constant birth-death process
include("bd_constant/constantBDP.jl")
## birth-death-shift process
include("bd_shift/postorder.jl")
include("bd_shift/postorder_nosave.jl")
include("bd_shift/preorder.jl")
include("bd_shift/ODE.jl")
include("bd_shift/logLroot.jl")
include("bd_shift/tree_rates.jl")
include("bd_shift/backwards_forwards.jl")
include("bd_shift/birth_death_shift.jl")
include("bd_shift/extinction.jl")
include("bd_shift/nshifts.jl")
include("bd_shift/analysis.jl")
include("bd_shift/optimize.jl")
include("bd_shift/shift_probability.jl")
include("bd_shift/tip_rates.jl")
## input-output
include("io/writenewick.jl")
include("io/readnewick.jl")
include("io/readtree.jl")
## rcall
include("rcall/rconvert.jl")
## plot
include("plot/plot_tree.jl")
# Path into package
export path
path(x...; dir::String = "data") = joinpath(@__DIR__, "..", dir, x...)
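## e.g. path("primates.tre") resolves to <package root>/data/primates.tre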
## precompile
#PrecompileTools.@setup_workload begin
# bears_tree = readtree(path("bears.tre"))
#
#end
end
| Pesto | https://github.com/kopperud/Pesto.jl.git |
|
["MIT"] | 0.1.6 | 98e186b38d34174a6d8fc45f6906d6c007c7a4ad | code | 733 |
export SSEconstant, SSEdata, BDconstant
abstract type SSE <: Distributions.ContinuousUnivariateDistribution end
struct SSEconstant <: SSE
λ
μ
η
end
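## a sketch of a two-state constant-rate model, assuming vectors of per-state
## rates and a scalar shift rate: SSEconstant([0.1, 0.2], [0.05, 0.05], 0.01)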
struct SSEtimevarying <: SSE
λ::Function
μ::Function
η::Function
end
struct BDconstant <: Distributions.ContinuousUnivariateDistribution
λ
μ
end
struct SSEdata
state_space
trait_data
edges
tiplab
node_depth
ρ
branch_lengths
branching_times
po
Nnode
end
struct phylo
edge::Matrix{Int64}
edge_length::Vector{Float64}
Nnode::Int64
tip_label::Vector{String}
node_depths::Vector{Float64}
branching_times::Vector{Float64}
po::Vector{Int64}
end
struct SSEresult
phy
rates
end
| Pesto | https://github.com/kopperud/Pesto.jl.git |