licenses: sequencelengths 1 to 3
version: stringclasses, 677 values
tree_hash: stringlengths 40 to 40
path: stringclasses, 1 value
type: stringclasses, 2 values
size: stringlengths 2 to 8
text: stringlengths 25 to 67.1M
package_name: stringlengths 2 to 41
repo: stringlengths 33 to 86
[ "MIT" ]
0.0.8
3cd4ecef2dbe4b2fb45c37273e9709548a4051d7
docs
735
--- title: "LaTeX README.md" author: "Simon Frost" date: "25 February 2016" output: html_document --- ```{r setup, include=FALSE} knitr::opts_chunk$set(echo = TRUE) ``` ```{r} require(devtools) install_git("https://github.com/muschellij2/latexreadme") ``` ```{r} library(latexreadme) ``` ```{r} md = file.path("README_unparse.md") download.file("https://raw.githubusercontent.com/sdwfrost/PiecewiseDeterministicMarkovProcesses.jl/master/README_unparse.md",destfile = md, method = "curl") new_md = file.path("README.md") parse_latex(md, new_md, git_username = "sdwfrost", git_reponame = "PiecewiseDeterministicMarkovProcesses.jl") library(knitr) new_html = pandoc(new_md, format = "html") ```
PiecewiseDeterministicMarkovProcesses
https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git
[ "MIT" ]
0.0.8
3cd4ecef2dbe4b2fb45c37273e9709548a4051d7
docs
761
## Specify a jump with a function See `examples/pdmp_example_eva.jl` for an example. ## Rejection method stopped, recover data! If you choose an upper bound for the rejection method that is too small and it triggers an interruption like ```julia ERROR: AssertionError: Error, your bound on the rates is not high enough!, [26.730756983739408, 20.0] ``` then `solve` does not return anything. However, in order to understand why your bound is too small, you may want to look at the trajectory up to the point where your bound failed. Don't worry, your computation is still in memory! If your call looks like this: ```julia sol = solve(problem, Rejection(Tsit5())) ``` then the trajectory is saved in the variables `problem.time`, `problem.Xc` and `problem.Xd`.
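For example, a minimal inspection sketch, assuming the fields named above hold the partial trajectory (the container types are not specified here, so treat the indexing as illustrative):

```julia
# Hedged sketch: after the AssertionError above, look at how far the
# simulation got before the rejection bound failed.
t_partial  = problem.time    # jump times reached so far
xc_partial = problem.Xc      # continuous states along the partial trajectory
xd_partial = problem.Xd      # discrete states along the partial trajectory
@show length(t_partial) last(t_partial)   # number of jumps and the failure time
```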
PiecewiseDeterministicMarkovProcesses
https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git
[ "MIT" ]
0.0.8
3cd4ecef2dbe4b2fb45c37273e9709548a4051d7
docs
2204
# PiecewiseDeterministicMarkovProcesses.jl PiecewiseDeterministicMarkovProcesses.jl is a Julia package that allows simulation of *Piecewise Deterministic Markov Processes* (PDMP); these encompass hybrid systems and jump processes, composed of continuous and discrete components, as well as processes with time-varying rates. The aim of the package is to provide methods for the simulation of these processes that are "statistically exact" up to the ODE integrator. *If you find this package useful, please star the repo. If you use it in your work, please cite this code and send us an email so that we can cite your work here.* ## Definition of the Jump process We briefly recall facts about a simple class of PDMPs. They are described by a pair $(x_c, x_d)$ where $x_c$ is the solution of the differential equation $$\frac{dx_c(t)}{dt} = F(x_c(t),x_d(t),p,t).$$ The second component $x_d$ is a piecewise-constant array of type `Int`, and `p` denotes some parameters. The jumps occur at rates $R(x_c(t),x_d(t),p,t)$. At each jump, $x_d$ or $x_c$ can be affected. ## Related projects - [Gillespie.jl](https://github.com/sdwfrost/Gillespie.jl): a package for simulation of pure jump processes, *i.e.* without the continuous part $x_c$. - [DiffEqJump.jl](https://github.com/JuliaDiffEq/DiffEqJump.jl): similar to our setting with a different sampling algorithm - [PDSampler.jl](https://github.com/alan-turing-institute/PDSampler.jl) - [ConstrainedPDMP.jl](https://github.com/tlienart/ConstrainedPDMP.jl) ## Installation To install this package, run the command ```julia add PiecewiseDeterministicMarkovProcesses ``` ## References - R. Veltz, [A new twist for the simulation of hybrid systems using the true jump method](https://arxiv.org/abs/1504.06873), arXiv preprint, 2015. - A. Drogoul and R. Veltz, [Hopf bifurcation in a nonlocal nonlinear transport equation stemming from stochastic neural dynamics](https://aip.scitation.org/doi/abs/10.1063/1.4976510), Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(2), 2017. - Aymard, Campillo, and Veltz, [Mean-Field Limit of Interacting 2D Nonlinear Stochastic Spiking Neurons](https://arxiv.org/abs/1906.10232), arXiv preprint, 2019.
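A minimal sketch of how $F$ and $R$ translate into code (the in-place signatures follow the problem-specification page documented later in these docs; the concrete dynamics are illustrative only):

```julia
# Illustrative only: a scalar xc driven towards the discrete state xd[1],
# and a single jump rate depending on xc. Signatures as documented below.
function F!(dxc, xc, xd, p, t)
    dxc[1] = -xc[1] + xd[1]              # dxc/dt = F(xc, xd, p, t)
end

function R!(rate, xc, xd, p, t, issum::Bool)
    total = p[1] * (1 + xc[1]^2)         # R(xc, xd, p, t), kept strictly positive
    issum && return total                # only the total rate is needed
    rate[1] = total                      # otherwise populate the rate vector
    return 0.0
end
```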
PiecewiseDeterministicMarkovProcesses
https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git
[ "MIT" ]
0.0.8
3cd4ecef2dbe4b2fb45c37273e9709548a4051d7
docs
68
# Library ```@docs PDMPProblem ``` ## Solvers ```@docs solve ```
PiecewiseDeterministicMarkovProcesses
https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git
[ "MIT" ]
0.0.8
3cd4ecef2dbe4b2fb45c37273e9709548a4051d7
docs
2794
## Mathematical Specification of a PDMP Problem ### Vector field To define a PDMP Problem, you first need to give the function $F$ and the initial condition $x_{c,0}$ which define an ODE: ```math \frac{dx_c}{dt} = F(x_c(t),x_d(t),p,t) ``` where $F$ should be specified in-place as `F(dxc,xc,xd,p,t)`, and `xc0` should be an `AbstractArray` (or number) whose geometry matches the desired geometry of `xc`. Note that we are not limited to numbers or vectors for `xc0`; one may also provide `xc0` as an arbitrary matrix or higher-dimensional tensor. ### Jumps Jumps are defined as a jump process which changes the state at a rate $R$, a scalar function of the type ```math R(x_c(t),x_d(t),p,t). ``` Note that in between jumps, $x_d(t)$ is constant but $x_c(t)$ is allowed to evolve. $R$ should be specified in-place as `R(rate,xc,xd,p,t,issum::Bool)` where it mutates `rate`. Note that a boolean `issum` is provided, and the behavior of `R` should be as follows: - if `issum == true`, we only require `R` to return the total rate, *e.g.* `sum(rate)`. We use this formalism because sometimes you can compute the sum without mutating `rate`. - if `issum == false`, `R` must populate `rate` with the updated rates. We then need to provide the way the jumps affect the state variable. There are two possible ways here: - either give a transition matrix `nu`: it will only affect the discrete component `xd` and leave `xc` unaffected; - or give a function `Delta(xc, xd, parms, t, ind_reaction::Int64)` to implement the jumps, where you can mutate `xc`, `xd` or `parms`. The argument `ind_reaction` is the index of the reaction at which the jump occurs. See `examples/pdmp_example_eva.jl` for an example. ## Problem Type ### Constructors - `PDMPProblem(F,R,Delta,nu,xc0,xd0,p,tspan)` - `PDMPProblem(F,R,nu,xc0,xd0,p,tspan)` when one does not want to provide the function `Delta` - `PDMPProblem(F,R,Delta,reaction_number::Int64,xc0,xd0,p,tspan)` when one does not want to provide the transition matrix `nu`. The length `reaction_number` of the rate vector must then be provided. We also provide a wrapper to [DiffEqJump.jl](https://github.com/JuliaDiffEq/DiffEqJump.jl). This is quite similar to how a `JumpProblem` would be created. - `PDMPProblem(prob, jumps...)` where `prob` can be an `ODEProblem`. For an example, please consider `example/examplediffeqjumpwrapper.jl`. ### Fields - `F`: the function of the ODE - `R`: the function to compute the transition rates - `Delta` [Optional]: the function to effect the jumps - `nu` [Optional]: the transition matrix - `xc0`: the initial condition of the continuous part - `xd0`: the initial condition of the discrete part - `p`: the parameters to be provided to the functions `F, R, Delta` - `tspan`: the timespan for the problem.
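Putting the pieces together, a hedged construction sketch using the second constructor listed above (the concrete values and function bodies are illustrative, not taken from the package examples):

```julia
# Sketch only: two reactions acting on a two-component discrete state xd,
# built with the constructor PDMPProblem(F, R, nu, xc0, xd0, p, tspan).
using PiecewiseDeterministicMarkovProcesses

F!(dxc, xc, xd, p, t) = (dxc[1] = -p[1] * xc[1]; nothing)

function R!(rate, xc, xd, p, t, issum::Bool)
    if issum
        return (1.0 + xc[1]^2) + p[2]     # total rate only
    else
        rate[1] = 1.0 + xc[1]^2           # reaction 1
        rate[2] = p[2]                    # reaction 2 (constant rate)
        return 0.0
    end
end

nu    = [1 0; 0 -1]                       # transition matrix acting on xd
xc0   = [1.0]                             # initial continuous state
xd0   = [0, 0]                            # initial discrete state
p     = [1.0, 10.0]
tspan = (0.0, 10.0)

problem = PDMPProblem(F!, R!, nu, xc0, xd0, p, tspan)
```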
PiecewiseDeterministicMarkovProcesses
https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git
[ "MIT" ]
0.0.8
3cd4ecef2dbe4b2fb45c37273e9709548a4051d7
docs
5954
# PDMP Solvers `solve(prob::PDMPProblem, alg; kwargs)` Solves the PDMP defined by `prob` using the algorithm `alg`. ## Simulation methods We provide several methods for the simulation: - a relatively recent trick, called **CHV**, explained in [paper-2015](http://arxiv.org/abs/1504.06873), which allows one to implement the **True Jump Method** without the need for event detection schemes in the ODE integrator. These event detections can be quite numerically unstable, as explained in [paper-2015](http://arxiv.org/abs/1504.06873), and CHV provides a solution to this problem. - **rejection methods** for which the user is asked to provide a bound on the **total** reaction rate. These methods are the most "exact" but not the fastest if the reaction rate bound is not tight. In case the flow is known **analytically**, a method is also provided. These methods require solving stiff ODEs (for CHV) in an efficient manner. [`Sundials.jl`](https://github.com/JuliaLang/Sundials.jl) and [`LSODA.jl`](https://github.com/rveltz/LSODA.jl) are great, but other solvers can also be considered (see [stiff ode solvers](http://lh3lh3.users.sourceforge.net/solveode.shtml) and also the [solvers](http://docs.juliadiffeq.org/stable/solvers/ode_solve.html) from [DifferentialEquations.jl](https://github.com/JuliaDiffEq/DifferentialEquations.jl)). Hence, the current package allows the use of all solvers in `DifferentialEquations.jl`, thereby giving access to a wide range of solvers. In particular, we can test different solvers to see how precise they are. Here is an example from `examples/pdmpStiff.jl` for which an analytical expression is available, allowing computation of the errors: ```julia Comparison of solvers --> norm difference = 0.00019114008823351014 - solver = cvode --> norm difference = 0.00014770067837588385 - solver = lsoda --> norm difference = 0.00018404736432131585 - solver = CVODEBDF --> norm difference = 6.939603217404056e-5 - solver = CVODEAdams --> norm difference = 2.216652299580346e-5 - solver = tsit5 --> norm difference = 2.2758951345736023e-6 - solver = rodas4P-noAutoDiff --> norm difference = 2.496987313804766e-6 - solver = rodas4P-AutoDiff --> norm difference = 0.0004373003700521849 - solver = RS23 --> norm difference = 2.216652299580346e-5 - solver = AutoTsit5RS23 ``` !!! note "ODE Solvers" A lot of [care](https://discourse.julialang.org/t/help-reduce-large-gc-time/17215) has been taken to make sure that the algorithms do not allocate and hence are fast. This is based on an iterator interface of `DifferentialEquations`. If you choose `save_positions = (false, false)`, the allocations should be independent of the requested number of jumps. However, the iterator solution is not yet available for `LSODA` in `DifferentialEquations`. Hence, you can pass `ode = :lsoda` to access an old version of the algorithm (which allocates), or any other solver like `ode = Tsit5()` to access the **new** solver. ## How to choose an algorithm? The choice between `CHV` and `Rejection` only depends on how much you know about the system. More precisely, if the total rate function does not vary much in between jumps, use the rejection method. For example, if the rate is $R(x_c(t)) = 1+0.1\cos(t)$, then $1.1$ provides a tight bound to use for the rejection method and almost no (fictitious) jumps will be rejected. In all other cases, one should try the CHV method, for which no a priori knowledge of the rate function is needed. !!! warning "CHV Method" A strong requirement for the CHV method is that the total rate (*i.e.* `sum(rate)`) must be positive. This can easily be achieved by adding a dummy Poisson process with very low intensity (see examples). ## Common Solver Options To simulate a PDMP, one uses `solve(prob::PDMPProblem, alg; kwargs)`. The fields are as follows: - `alg` can be `CHV(ode)` (for the [CHV algorithm](https://arxiv.org/abs/1504.06873)), `Rejection(ode)` for the Rejection algorithm and `RejectionExact()` for the rejection algorithm in case the flow in between jumps is known analytically. In this latter case, `prob.F` is used for the specification of the flow. The ODE solver `ode` can be any solver of [DifferentialEquations.jl](https://github.com/JuliaDiffEq/DifferentialEquations.jl), like `Tsit5()` for example, or any one of the list `[:cvode, :lsoda, :adams, :bdf, :euler]`. Indeed, the package implements an iterator interface which does not work yet with `ode = LSODA()`. In order to have access to the ODE solver `LSODA()`, one should use `ode = :lsoda`. - `n_jumps = 10`: requires the solver to only compute the first 10 jumps. - `save_position = (true, false)`: (output control) requires the solver to save the pre-jump but not the post-jump states `xc, xd`. - `verbose = true`: requires the solver to print information concerning the simulation of the PDMP. - `reltol`: relative tolerance used in the ODE solver. - `abstol`: absolute tolerance used in the ODE solver. - `ind_save_c`: which indices of `xc` should be saved. - `ind_save_d`: which indices of `xd` should be saved. - `save_rate = true`: requires the solver to save the total rate. Can be useful when estimating the rate bounds in order to use the Rejection algorithm as a second try. - `X_extended = zeros(Tc, 1 + 1)`: (advanced use) option used to provide the shape of the extended array in the [CHV algorithm](https://arxiv.org/abs/1504.06873). Can be useful in order to use `StaticArrays.jl` for example. - `finalizer = finalize_dummy`: allows the user to pass a function `finalizer(rate, xc, xd, p, t)` which is called after each jump. Can be used to overload / add saving / plotting mechanisms. !!! note "Solvers for the `DiffEqJump` wrapper" We provide a basic wrapper that should work for `VariableJumps` (the other types of jumps have not been thoroughly tested). You can use `CHV` for this type of problem. The `Rejection` solver is not functional yet.
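For illustration, a hedged call that combines several of the keyword arguments listed above (the solver choice and tolerance values are arbitrary, and `problem` is assumed to be an existing `PDMPProblem`):

```julia
# Sketch only: keyword names are the ones documented in the list above.
using PiecewiseDeterministicMarkovProcesses, OrdinaryDiffEq   # OrdinaryDiffEq provides Tsit5()
const PDMP = PiecewiseDeterministicMarkovProcesses

sol = PDMP.solve(problem, CHV(Tsit5());
                 n_jumps   = 100,     # stop after the first 100 jumps
                 verbose   = false,   # silence progress output
                 reltol    = 1e-7,    # ODE solver tolerances
                 abstol    = 1e-9,
                 save_rate = true)    # keep the total rate, e.g. to calibrate a rejection bound
```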
PiecewiseDeterministicMarkovProcesses
https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git
[ "MIT" ]
0.0.8
3cd4ecef2dbe4b2fb45c37273e9709548a4051d7
docs
4691
See also the [examples directory](https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl/tree/master/examples) for more involved examples. ## Basic example with CHV method A simple example of a jump process is shown. We look at the following process with switching dynamics, where ```math X(t) = (x_c(t), x_d(t)) \in\mathbb R\times\lbrace-1,1\rbrace. ``` In between jumps, $x_c$ evolves according to ```math \dot x_c(t) = 10x_c(t),\quad\text{ if } x_d(t)\text{ is even}. ``` ```math \dot x_c(t) = -3x_c(t)^2,\quad \text{ otherwise}. ``` We first need to load the library. ```julia using PiecewiseDeterministicMarkovProcesses const PDMP = PiecewiseDeterministicMarkovProcesses ``` We then define a function that encodes the dynamics in between jumps. We need to provide the vector field of the ODE. Hence, we define a function that, given the continuous state $x_c$ and the discrete state $x_d$ at time $t$, returns the vector field. In addition, some parameters can be passed with the variable `parms`. ```julia function F!(ẋ, xc, xd, parms, t) if mod(xd[1],2)==0 ẋ[1] = 10xc[1] else ẋ[1] = -3xc[1]^2 end end ``` Let's consider a stochastic process with the following transitions: | Transition | Rate | Reaction number | Jump | |---|---|---| ---| |$x_d\to x_d+[1,0]$ | $k(x_c)$ | 1 | [1] | |$x_d\to x_d+[0,-1]$ | $parms$ | 2 | [1] | We implement these jumps using a 2x2 matrix `nu` of integers, such that the jump performed by reaction `i` changes the discrete component `xd` by the `i`-th row of `nu`. Hence, we have `nu = [1 0;0 -1]`. The rates of these reactions are encoded in the following function. ```julia k(x) = 1 + x function R!(rate, xc, xd, parms, t, issum::Bool) # rate function if issum == false # in this case, one is required to mutate the vector `rate` rate[1] = k(xc[1]) rate[2] = parms[1] return 0. else # in this case, one is required to return the sum of the rates return k(xc[1]) + parms[1] end end # initial conditions for the continuous/discrete variables xc0 = [1.0] xd0 = [0, 0] # matrix of jumps for the discrete variables, analogous to chemical reactions nu = [1 0 ; 0 -1] # parameters parms = [50.] ``` We define a problem type by giving the characteristics of the process `F, R, nu`, the initial conditions, and the timespan to solve over: ```julia using Random Random.seed!(8) # to get the same result as this simulation! problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (0.0, 10.0)) ``` After defining the problem, you solve it using `solve`. ```julia using Sundials # provides CVODE_BDF() sol = PDMP.solve(problem, CHV(CVODE_BDF())) ``` In this case, we chose to sample `problem` with the [CHV algorithm](https://arxiv.org/abs/1504.06873) where the flow in between jumps is integrated with the solver `CVODE_BDF()` from [Sundials.jl](https://github.com/JuliaLang/Sundials.jl), accessible through [DifferentialEquations.jl](https://github.com/JuliaDiffEq/DifferentialEquations.jl). We can then plot the solution as follows: ```julia # plotting using Plots Plots.plot(sol.time,sol.xc[1,:],label="xc") ``` This produces the graph: ![TCP](example1.png) ## Basic example with the rejection method The previous method is useful when the total rate function varies a lot. In the case where the total rate is mostly constant in between jumps, the **rejection method** is more appropriate. The **rejection method** assumes some a priori knowledge of the process one wants to simulate. In particular, the user must be able to provide a bound on the total rate. More precisely, the user must provide a constant bound in between the jumps. To use this method, the rate function (called `R2!` below) must return `sum(rate), bound_rejection`. Note that this means that in between jumps, one has: `sum(rate)(t) <= bound_rejection` ```julia function R2!(rate, xc, xd, parms, t, issum::Bool) # rate function bound_rejection = 1. + parms[1] + 15 # bound on the total rate if issum == false # in this case, one is required to mutate the vector `rate` rate[1] = k(xc[1]) rate[2] = parms[1] return 0., bound_rejection else # in this case, one is required to return the sum of the rates return k(xc[1]) + parms[1], bound_rejection end end ``` We can now simulate this process as follows: ```julia Random.seed!(8) # to get the same result as this simulation! problem = PDMP.PDMPProblem(F!, R2!, nu, xc0, xd0, parms, (0.0, 1.0)) sol = PDMP.solve(problem, Rejection(CVODE_BDF())) ``` In this case, we chose to sample `problem` with the Rejection algorithm where the flow in between jumps is integrated with the solver `CVODE_BDF()` from [Sundials.jl](https://github.com/JuliaLang/Sundials.jl). We can then plot the solution as follows: ```julia # plotting using Plots Plots.plot(sol.time,sol.xc[1,:],label="xc") ``` This produces the graph: ![TCP](example2.png)
PiecewiseDeterministicMarkovProcesses
https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
code
796
using DerivableFunctions using Documenter DocMeta.setdocmeta!(DerivableFunctions, :DocTestSetup, :(using DerivableFunctions); recursive=true) makedocs(; modules=[DerivableFunctions], authors="Rafael Arutjunjan", repo="https://github.com/RafaelArutjunjan/DerivableFunctions.jl/blob/{commit}{path}#{line}", sitename="DerivableFunctions.jl", format=Documenter.HTML(; prettyurls=get(ENV, "CI", "false") == "true", canonical="https://RafaelArutjunjan.github.io/DerivableFunctions.jl", assets=String[], ), pages=[ "Home" => "index.md", "Differentiation Operators" => "Operators.md", "DFunctions" => "DFunctions.md" ], ) deploydocs(; repo="github.com/RafaelArutjunjan/DerivableFunctions.jl", devbranch="main", )
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
code
1160
module DerivableFunctions using Reexport using ReverseDiff, Zygote, FiniteDiff, FiniteDifferences @reexport using DerivableFunctionsBase # import such that they are available downstream as if defined here import DerivableFunctionsBase: MaximalNumberOfArguments, KillAfter, Builder import DerivableFunctionsBase: _GetArgLength, _GetArgLengthOutOfPlace, _GetArgLengthInPlace import DerivableFunctionsBase: GetSymbolicDerivative, SymbolicPassthrough import DerivableFunctionsBase: _GetDeriv, _GetGrad, _GetJac, _GetHess, _GetMatrixJac, _GetDoubleJac import DerivableFunctionsBase: _GetDerivPass, _GetGradPass, _GetJacPass, _GetHessPass, _GetDoubleJacPass, _GetMatrixJacPass import DerivableFunctionsBase: _GetGrad!, _GetJac!, _GetHess!, _GetMatrixJac! import DerivableFunctionsBase: _GetGradPass!, _GetJacPass!, _GetHessPass!, _GetMatrixJacPass! import DerivableFunctionsBase: suff suff(x::ReverseDiff.TrackedReal) = typeof(x) ## Add new backends to the output of diff_backends() import DerivableFunctionsBase: AddedBackEnds AddedBackEnds(::Val{true}) = [:ReverseDiff, :Zygote, :FiniteDifferences, :FiniteDiff] include("DifferentiationOperators.jl") end
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
code
5030
## out-of-place operator backends _GetDeriv(ADmode::Val{:ReverseDiff}; kwargs...) = throw("GetDeriv() not available for ReverseDiff.jl") _GetGrad(ADmode::Val{:ReverseDiff}; kwargs...) = ReverseDiff.gradient _GetJac(ADmode::Val{:ReverseDiff}; kwargs...) = ReverseDiff.jacobian _GetHess(ADmode::Val{:ReverseDiff}; kwargs...) = ReverseDiff.hessian _GetDeriv(ADmode::Val{:Zygote}; kwargs...) = throw("GetDeriv() not available for Zygote.jl") _GetGrad(ADmode::Val{:Zygote}; order::Int=-1, kwargs...) = (Func::Function,p;Kwargs...) -> Zygote.gradient(Func, p; kwargs...)[1] _GetJac(ADmode::Val{:Zygote}; order::Int=-1, kwargs...) = (Func::Function,p;Kwargs...) -> Zygote.jacobian(Func, p; kwargs...)[1] _GetHess(ADmode::Val{:Zygote}; order::Int=-1, kwargs...) = (Func::Function,p;Kwargs...) -> Zygote.hessian(Func, p; kwargs...) _GetDoubleJac(ADmode::Val{:Zygote}; kwargs...) = throw("GetDoubleJac() not available for Zygote.jl") # Zygote does not support mutating arrays _GetDeriv(ADmode::Val{:FiniteDiff}; kwargs...) = FiniteDiff.finite_difference_derivative _GetGrad(ADmode::Val{:FiniteDiff}; kwargs...) = FiniteDiff.finite_difference_gradient _GetJac(ADmode::Val{:FiniteDiff}; kwargs...) = FiniteDiff.finite_difference_jacobian _GetHess(ADmode::Val{:FiniteDiff}; kwargs...) = FiniteDiff.finite_difference_hessian _GetDoubleJac(ADmode::Val{:FiniteDiff}; kwargs...) = throw("GetDoubleJac() not available for FiniteDiff.jl") _GetDeriv(ADmode::Val{:FiniteDifferences}; kwargs...) = throw("GetDeriv() not available for FiniteDifferences.jl") _GetGrad(ADmode::Val{:FiniteDifferences}; order::Int=3, kwargs...) = (Func::Function,p;Kwargs...) -> FiniteDifferences.grad(central_fdm(order,1), Func, p; kwargs...)[1] _GetJac(ADmode::Val{:FiniteDifferences}; order::Int=3, kwargs...) = (Func::Function,p;Kwargs...) -> FiniteDifferences.jacobian(central_fdm(order,1), Func, p; kwargs...)[1] _GetHess(ADmode::Val{:FiniteDifferences}; order::Int=5, kwargs...) = (Func::Function,p;Kwargs...) -> FiniteDifferences.jacobian(central_fdm(order,1), z->FiniteDifferences.grad(central_fdm(order,1), Func, z)[1], p)[1] ## in-place operator backends _GetGrad!(ADmode::Val{:ReverseDiff}; kwargs...) = ReverseDiff.gradient! _GetJac!(ADmode::Val{:ReverseDiff}; kwargs...) = ReverseDiff.jacobian! _GetHess!(ADmode::Val{:ReverseDiff}; kwargs...) = ReverseDiff.hessian! _GetMatrixJac!(ADmode::Val{:ReverseDiff}; kwargs...) = _GetJac!(ADmode; kwargs...) # DELIBERATE!!!! _GetJac!() recognizes output format from given Array #_GetDeriv!(ADmode::Val{:FiniteDiff}; kwargs...) = FiniteDiff.finite_difference_derivative! _GetGrad!(ADmode::Val{:FiniteDiff}; kwargs...) = FiniteDiff.finite_difference_gradient! function _GetJac!(ADmode::Val{:FiniteDiff}; kwargs...) function FiniteDiff__finite_difference_jacobian!(Y::AbstractArray{<:Number}, F::Function, X, args...; kwargs...) # in-place FiniteDiff operators assume that function itself is also in-place if MaximalNumberOfArguments(F) > 1 FiniteDiff.finite_difference_jacobian!(Y, F, X, args...; kwargs...) else # Use fake method (Y[:] .= vec(_GetJac(ADmode; kwargs...)(F, X, args...))) # FiniteDiff.finite_difference_jacobian!(Y, (Res,x)->copyto!(Res,F(x)), args...; kwargs...) end end end _GetHess!(ADmode::Val{:FiniteDiff}; kwargs...) = FiniteDiff.finite_difference_hessian! _GetMatrixJac!(ADmode::Val{:FiniteDiff}; kwargs...) = _GetJac!(ADmode; kwargs...) # Fake in-place methods function _GetGrad!(ADmode::Union{<:Val{:Zygote},<:Val{:FiniteDifferences}}; verbose::Bool=false, kwargs...) 
verbose && (@warn "Using fake in-place differentiation operator GetGrad!() for ADmode=$ADmode because backend does not supply appropriate method.") FakeInPlaceGrad!(Y::AbstractVector,F::Function,X::AbstractVector) = copyto!(Y, _GetGrad(ADmode; kwargs...)(F, X)) end function _GetJac!(ADmode::Union{Val{:Zygote},<:Val{:FiniteDifferences}}; verbose::Bool=false, kwargs...) verbose && (@warn "Using fake in-place differentiation operator GetJac!() for ADmode=$ADmode because backend does not supply appropriate method.") FakeInPlaceJac!(Y::AbstractMatrix,F::Function,X::AbstractVector) = copyto!(Y, _GetJac(ADmode; kwargs...)(F, X)) end function _GetHess!(ADmode::Union{Val{:Zygote},<:Val{:FiniteDifferences}}; verbose::Bool=false, kwargs...) verbose && (@warn "Using fake in-place differentiation operator GetHess!() for ADmode=$ADmode because backend does not supply appropriate method.") FakeInPlaceHess!(Y::AbstractMatrix,F::Function,X::AbstractVector) = copyto!(Y, _GetHess(ADmode; kwargs...)(F, X)) end function _GetMatrixJac!(ADmode::Union{Val{:Zygote},<:Val{:FiniteDifferences}}; verbose::Bool=false, kwargs...) verbose && (@warn "Using fake in-place differentiation operator GetMatrixJac!() for ADmode=$ADmode because backend does not supply appropriate method.") FakeInPlaceMatrixJac!(Y::AbstractArray,F::Function,X::AbstractVector) = (Y[:] .= vec(_GetJac(ADmode; kwargs...)(F, X))) end
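A brief usage sketch of the "fake" in-place operators defined above (these are internal, underscore-prefixed helpers; the public `GetGrad!`/`GetJac!` wrappers are the supported entry points, so treat this purely as illustration):

```julia
# Sketch: obtain the fake in-place gradient operator for the Zygote backend
# and apply it. The three-argument call matches FakeInPlaceGrad! defined above.
using DerivableFunctions

grad! = DerivableFunctions._GetGrad!(Val(:Zygote))
Y = zeros(2)
grad!(Y, x -> x[1]^2 + exp(x[2]), [5.0, 10.0])   # Y now holds the gradient
```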
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
code
3222
using SafeTestsets @safetestset "Bare Differentiation Operator Backends (out-of-place)" begin using DerivableFunctions, Test, ForwardDiff Metric3(x) = [sinh(x[3]) exp(x[1])*sin(x[2]) 0; 0 cosh(x[2]) cos(x[2])*x[3]*x[2]; exp(x[2]) cos(x[3])*x[1]*x[2] 1.] X = ForwardDiff.gradient(x->x[1]^2 + exp(x[2]), [5,10.]) Y = ForwardDiff.jacobian(x->[x[1]^2, exp(x[2])], [5,10.]) Z = ForwardDiff.hessian(x->x[1]^2 + exp(x[2]) + x[1]*x[2], [5,10.]) Mat = reshape(ForwardDiff.jacobian(vec∘Metric3, [5,10,15.]), 3, 3, 3) Djac = reshape(ForwardDiff.jacobian(p->vec(ForwardDiff.jacobian(x->[exp(x[1])*sin(x[2]), cosh(x[2])*x[1]*x[2]],p)), [5,10.]), 2,2,2) function MyTest(ADmode::Symbol; atol::Real=2e-5, kwargs...) Grad, Jac, Hess = GetGrad(ADmode; kwargs...), GetJac(ADmode; kwargs...), GetHess(ADmode; kwargs...) MatrixJac = GetMatrixJac(ADmode; order=8, kwargs...) @test isapprox(Grad(x->x[1]^2 + exp(x[2]), [5,10.]), X; atol=atol) @test isapprox(Jac(x->[x[1]^2, exp(x[2])], [5,10.]), Y; atol=atol) @test isapprox(Hess(x->x[1]^2 + exp(x[2]) + x[1]*x[2], [5,10.]), Z; atol=atol) @test maximum(abs.(MatrixJac(Metric3, [5,10,15.]) - Mat)) < atol end for ADmode ∈ [:Zygote, :ReverseDiff, :FiniteDifferences] MyTest(ADmode) end MyTest(:FiniteDiff; atol=0.2) function TestDoubleJac(ADmode::Symbol; atol::Real=2e-5, kwargs...) DoubleJac = GetDoubleJac(ADmode; order=8, kwargs...) maximum(abs.(DoubleJac(x->[exp(x[1])*sin(x[2]), cosh(x[2])*x[1]*x[2]], [5,10.]) - Djac)) < atol end for ADmode ∈ [:ReverseDiff, :FiniteDifferences] @test TestDoubleJac(ADmode) end # Zygote does not support mutating arrays @test_broken TestDoubleJac(:Zygote) @test_broken TestDoubleJac(:FiniteDiff) end @safetestset "Bare Differentiation Operator Backends (in-place)" begin using DerivableFunctions, Test, ForwardDiff Metric3(x) = [sinh(x[3]) exp(x[1])*sin(x[2]) 0; 0 cosh(x[2]) cos(x[2])*x[3]*x[2]; exp(x[2]) cos(x[3])*x[1]*x[2] 1.] X = ForwardDiff.gradient(x->x[1]^2 + exp(x[2]), [5,10.]) Y = ForwardDiff.jacobian(x->[x[1]^2, exp(x[2])], [5,10.]) Z = ForwardDiff.hessian(x->x[1]^2 + exp(x[2]) + x[1]*x[2], [5,10.]) Mat = reshape(ForwardDiff.jacobian(vec∘Metric3, [5,10,15.]), 3, 3, 3) function MyInplaceTest(ADmode::Symbol; atol::Real=2e-5, kwargs...) Grad! = GetGrad!(ADmode, x->x[1]^2 + exp(x[2]); kwargs...) Jac! = GetJac!(ADmode, x->[x[1]^2, exp(x[2])]; kwargs...) Hess! = GetHess!(ADmode, x->x[1]^2 + exp(x[2]) + x[1]*x[2]; kwargs...) MatrixJac! = GetMatrixJac!(ADmode, Metric3; order=8, kwargs...) Xres = similar(X); Yres = similar(Y); Zres = similar(Z); Matres = similar(Mat) Grad!(Xres, [5,10.]); @test isapprox(Xres, X; atol=atol) Jac!(Yres, [5,10.]); @test isapprox(Yres, Y; atol=atol) Hess!(Zres, [5,10.]); @test isapprox(Zres, Z; atol=atol) MatrixJac!(Matres, [5,10,15.]); @test maximum(abs.(Matres - Mat)) < atol end for ADmode ∈ [:Zygote, :ReverseDiff, :FiniteDifferences] MyInplaceTest(ADmode) end MyInplaceTest(:FiniteDiff; atol=0.2) end
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
docs
2908
# DerivableFunctions.jl *A Julia package for backend-agnostic differentiation combined with symbolic passthrough.* | **Documentation** | **Build Status** | |:-----------------:|:----------------:| | [![Stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://RafaelArutjunjan.github.io/DerivableFunctions.jl/stable) [![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://RafaelArutjunjan.github.io/DerivableFunctions.jl/dev) | [![Build Status](https://ci.appveyor.com/api/projects/status/github/RafaelArutjunjan/DerivableFunctions.jl?svg=true)](https://ci.appveyor.com/project/RafaelArutjunjan/DerivableFunctions-jl) [![codecov](https://codecov.io/gh/RafaelArutjunjan/DerivableFunctions.jl/branch/main/graph/badge.svg?token=boWzh2IUO9)](https://codecov.io/gh/RafaelArutjunjan/DerivableFunctions.jl) | **Note: Most of the core functionality has been outsourced to** [**DerivableFunctionsBase.jl**](https://github.com/RafaelArutjunjan/DerivableFunctionsBase.jl) **to decrease load times whenever only a single backend is required.** This package provides a front-end for differentiation operations in Julia that allows for code written by the user to be agnostic with respect to many of the available automatic and symbolic differentiation tools available in Julia. Moreover, the differentiation operators provided by **DerivableFunctions.jl** are overloaded to allow for passthrough of symbolic variables. That is, if symbolic types such as `Symbolics.Num` are detected, the differentiation operators automatically switch to symbolic differentiation. In addition to these operators, **DerivableFunctions.jl** also provides the `DFunction` type, which stores methods for the first and second derivatives to allow for more convenient and potentially more performant computations if the derivatives are known. For detailed examples, please see the [**documentation**](https://RafaelArutjunjan.github.io/DerivableFunctions.jl/dev). ```julia julia> D = DFunction(x->[exp(x[1]^2 - x[2]), log(sin(x[2]))]) (::DerivableFunction) (generic function with 1 method) julia> EvalF(D,[1,2]) 2-element Vector{Float64}: 0.36787944117144233 -0.09508303609516061 julia> EvaldF(D,[1,2]) 2×2 Matrix{Float64}: 0.735759 -0.367879 0.0 -0.457658 julia> EvalddF(D,[1,2]) 2×2×2 Array{Float64, 3}: [:, :, 1] = 2.20728 -0.735759 0.0 0.0 [:, :, 2] = -0.735759 0.367879 0.0 -1.20945 julia> using Symbolics; @variables z[1:2] 1-element Vector{Symbolics.Arr{Num, 1}}: z[1:2] julia> EvalddF(D, z) 2×2×2 Array{Num, 3}: [:, :, 1] = 2exp(z[1]^2 - z[2]) + 4(z[1]^2)*exp(z[1]^2 - z[2]) -2exp(z[1]^2 - z[2])*z[1] 0 0 [:, :, 2] = -2exp(z[1]^2 - z[2])*z[1] exp(z[1]^2 - z[2]) 0 (-(cos(z[2])^2)) / (sin(z[2])^2) + (-sin(z[2])) / sin(z[2]) ```
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
docs
1024
### DFunctions The `DFunction` type stores the first and second derivatives of a given input function, which is not only convenient but can enhance performance significantly. At this point, the `DFunction` type requires the given function to be out-of-place; however, this will likely be extended to in-place functions in the future. Once constructed, a `DFunction` object `D` can be evaluated at `x` via the syntax `EvalF(D,x)`, `EvaldF(D,x)` and `EvalddF(D,x)`. In order to construct the appropriate derivatives, the input and output dimensions of the given function `F` are assessed and the appropriate operators (`GetGrad()`, `GetJac()` and so on) are called. By default, `DFunction()` attempts to construct the derivatives symbolically; however, a different backend can be specified via the `ADmode` keyword: ```@example 2 using DerivableFunctions D = DFunction(x->[x^7 - sin(x), tanh(x)]; ADmode=Val(:ReverseDiff)) EvalF(D, 5.), EvaldF(D, 5.), EvalddF(D, 5.) using Symbolics; @variables y EvalF(D, y), EvaldF(D, y), EvalddF(D, y) ```
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
docs
5053
## Differentiation Operators [**DerivableFunctions.jl**](https://github.com/RafaelArutjunjan/DerivableFunctions.jl) aims to provide a backend-agnostic interface for differentiation and currently allows the user to seamlessly switch between [**ForwardDiff.jl**](https://github.com/JuliaDiff/ForwardDiff.jl), [**ReverseDiff.jl**](https://github.com/JuliaDiff/ReverseDiff.jl), [**Zygote.jl**](https://github.com/FluxML/Zygote.jl), [**FiniteDifferences.jl**](https://github.com/JuliaDiff/FiniteDifferences.jl) and [**Symbolics.jl**](https://github.com/JuliaSymbolics/Symbolics.jl). The desired backend is optionally specified in the first argument (default is ForwardDiff) via a `Symbol` or `Val`. The available backends can be listed via `diff_backends()`. Next, the function that is to be differentiated is provided. We will illustrate this syntax using the `GetMatrixJac` method: ```@example 1 using DerivableFunctions Metric(x) = [exp(x[1]^3) sin(cosh(x[2])); log(sqrt(x[1])) x[1]^2*x[2]^5] Jac = GetMatrixJac(Val(:ForwardDiff), Metric) Jac([1,2.]) ``` Moreover, these operators are overloaded to allow for passthrough of symbolic variables. ```@example 1 using Symbolics @variables z[1:2] J = Jac(z) J[:,:,1], J[:,:,2] ``` Since the function `Metric` in this example can be represented in terms of analytic expressions, it is also possible to construct its derivative symbolically: ```@example 1 SymJac = GetMatrixJac(Val(:Symbolic), Metric) SymJac([1,2.]) ``` Currently, [**DerivableFunctions.jl**](https://github.com/RafaelArutjunjan/DerivableFunctions.jl) exports `GetDeriv(), GetGrad(), GetHess(), GetJac(), GetDoubleJac()` and `GetMatrixJac()`. Furthermore, these operators also have in-place versions: ```@example 1 Jac! = GetMatrixJac!(Val(:ForwardDiff), Metric) Y = Array{Float64}(undef, 2, 2, 2) Jac!(Y, [1,2.]) ``` Just like the out-of-place versions, the in-place operators are overloaded for symbolic passthrough: ```@example 1 Ynum = Array{Num}(undef, 2, 2, 2) Jac!(Ynum, z) Ynum[:,:,1], Ynum[:,:,2] ``` The exported in-place operators include `GetGrad!(), GetHess!(), GetJac!()` and `GetMatrixJac!()`. ## Differentiation Backend-Agnostic Programming Essentially, the abstraction layer provided by **DerivableFunctions.jl** only requires the user to specify the "semantic" meaning of a given differentiation operation while allowing for flexible post hoc choice of backend as well as enabling symbolic pass through for the resulting computation. For example, when calculating differential-geometric quantities such as the Riemann or Ricci tensors, which depend on complicated combinations of up to second derivatives of the components of the metric tensor, a single implementation simultaneously provides a performant numerical implementation as well as allowing for analytical insight for simple examples. 
```julia using DerivableFunctions, Tullio, LinearAlgebra MetricPartials(Metric::Function, θ::AbstractVector; ADmode::Val=Val(:ForwardDiff)) = GetMatrixJac(ADmode, Metric)(θ) function ChristoffelSymbol(Metric::Function, θ::AbstractVector; ADmode::Val=Val(:ForwardDiff)) PDV = MetricPartials(Metric, θ; ADmode); InvMetric = inv(Metric(θ)) @tullio Γ[a,i,j] := ((1/2) * InvMetric)[a,m] * (PDV[j,m,i] + PDV[m,i,j] - PDV[i,j,m]) end function ChristoffelPartials(Metric::Function, θ::AbstractVector; ADmode::Val=Val(:ForwardDiff)) GetMatrixJac(ADmode, x->ChristoffelSymbol(Metric, x; ADmode))(θ) end function Riemann(Metric::Function, θ::AbstractVector; ADmode::Val=Val(:ForwardDiff)) Γ = ChristoffelSymbol(Metric, θ; ADmode) ∂Γ = ChristoffelPartials(Metric, θ; ADmode) @tullio Riem[i,j,k,l] := ∂Γ[i,j,l,k] - ∂Γ[i,j,k,l] @tullio Riem[i,j,k,l] += Γ[i,a,k]*Γ[a,j,l] - Γ[i,a,l]*Γ[a,j,k] end function Ricci(Metric::Function, θ::AbstractVector; ADmode::Val=Val(:ForwardDiff)) Riem = Riemann(Metric, θ; ADmode) @tullio Ric[a,b] := Riem[s,a,s,b] end function RicciScalar(Metric::Function, θ::AbstractVector; ADmode::Val=Val(:ForwardDiff)) InvMetric = inv(Metric(θ)) tr(transpose(Ricci(Metric, θ; ADmode)) * InvMetric) end ``` Clearly, this simplified implementation features some redundant evaluations of the inverse metric and could be made more efficient. Nevertheless, it nicely illustrates how succinctly complex real-world examples can be formulated. Given the metric tensor induced by the canonical embedding of ``S^2`` into ``\\mathbb{R}^3`` with spherical coordinates, it can be shown that the Ricci scalar assumes a constant value of ``R=2`` everywhere on ``S^2``. ```julia S2metric((θ,ϕ)) = [1.0 0; 0 sin(θ)^2] 2 ≈ RicciScalar(S2metric, rand(2); ADmode=Val(:ForwardDiff)) ≈ RicciScalar(S2metric, rand(2); ADmode=Val(:ReverseDiff)) ``` (In this particular instance, due to a term in the `ChristoffelSymbol` where the `sin` in the numerator does not cancel with the identical term in the denominator, the symbolic computation does not recognize the fact that the final expression can be simplified to yield exactly ``R=2``.) ```julia using Symbolics; @variables p[1:2] RicciScalar(S2metric, p) ```
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "MIT" ]
0.2.1
c7234ee9515cd1700bdad471f623249abaa27a3e
docs
234
```@meta CurrentModule = DerivableFunctions ``` # DerivableFunctions Documentation for [DerivableFunctions](https://github.com/RafaelArutjunjan/DerivableFunctions.jl). ```@index ``` ```@autodocs Modules = [DerivableFunctions] ```
DerivableFunctions
https://github.com/RafaelArutjunjan/DerivableFunctions.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1702
using ImplicitGlobalGrid using Documenter using DocExtensions using DocExtensions.DocumenterExtensions const DOCSRC = joinpath(@__DIR__, "src") const DOCASSETS = joinpath(DOCSRC, "assets") const EXAMPLEROOT = joinpath(@__DIR__, "..", "examples") DocMeta.setdocmeta!(ImplicitGlobalGrid, :DocTestSetup, :(using ImplicitGlobalGrid); recursive=true) @info "Copy examples folder to assets..." mkpath(DOCASSETS) cp(EXAMPLEROOT, joinpath(DOCASSETS, "examples"); force=true) @info "Preprocessing .MD-files..." include("reflinks.jl") MarkdownExtensions.expand_reflinks(reflinks; rootdir=DOCSRC) @info "Building documentation website using Documenter.jl..." makedocs(; modules = [ImplicitGlobalGrid], authors = "Samuel Omlin, Ludovic Räss, Ivan Utkin", repo = "https://github.com/eth-cscs/ImplicitGlobalGrid.jl/blob/{commit}{path}#{line}", sitename = "ImplicitGlobalGrid.jl", format = Documenter.HTML(; prettyurls = true, canonical = "https://omlins.github.io/ImplicitGlobalGrid.jl", collapselevel = 1, sidebar_sitename = true, edit_link = "master", ), pages = [ "Introduction" => "index.md", "Usage" => "usage.md", "Examples" => [hide("..." => "examples.md"), "examples/diffusion3D_multigpu_CuArrays_novis.md", "examples/diffusion3D_multigpu_CuArrays_onlyvis.md", ], "API reference" => "api.md", ], ) @info "Deploying docs..." deploydocs(; repo = "github.com/eth-cscs/ImplicitGlobalGrid.jl", push_preview = true, devbranch = "master", )
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1340
reflinks = Dict( "[AMDGPU.jl]" => "https://github.com/JuliaGPU/AMDGPU.jl", "[CUDA.jl]" => "https://github.com/JuliaGPU/CUDA.jl", "[GTC19]" => "https://on-demand.gputechconf.com/gtc/2019/video/_/S9368/", "[IJulia]" => "https://github.com/JuliaLang/IJulia.jl", "[ImplicitGlobalGrid.jl]" => "https://github.com/eth-cscs/ImplicitGlobalGrid.jl", "[JuliaCon19]" => "https://pretalx.com/juliacon2019/talk/LGHLC3/", "[JuliaCon20a]" => "https://www.youtube.com/watch?v=vPsfZUqI4_0", "[Julia CUDA paper 1]" => "https://doi.org/10.1109/TPDS.2018.2872064", "[Julia CUDA paper 2]" => "https://doi.org/10.1016/j.advengsoft.2019.02.002", "[Julia Plots documentation]" => "http://docs.juliaplots.org/latest/backends/", "[Julia Plots package]" => "https://github.com/JuliaPlots/Plots.jl", "[Julia package manager]" => "https://docs.julialang.org/en/v1/stdlib/Pkg/", "[Julia REPL]" => "https://docs.julialang.org/en/v1/stdlib/REPL/", "[MPI.jl]" => "https://github.com/JuliaParallel/MPI.jl", "[PASC19]" => "https://pasc19.pasc-conference.org/program/schedule/index.html%3Fpost_type=page&p=10&id=msa218&sess=sess144.html", )
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
4586
using ImplicitGlobalGrid, Plots @views d_xa(A) = A[2:end , : , : ] .- A[1:end-1, : , : ]; @views d_xi(A) = A[2:end ,2:end-1,2:end-1] .- A[1:end-1,2:end-1,2:end-1]; @views d_ya(A) = A[ : ,2:end , : ] .- A[ : ,1:end-1, : ]; @views d_yi(A) = A[2:end-1,2:end ,2:end-1] .- A[2:end-1,1:end-1,2:end-1]; @views d_za(A) = A[ : , : ,2:end ] .- A[ : , : ,1:end-1]; @views d_zi(A) = A[2:end-1,2:end-1,2:end ] .- A[2:end-1,2:end-1,1:end-1]; @views inn(A) = A[2:end-1,2:end-1,2:end-1] @views function diffusion3D() # Physics lam = 1.0; # Thermal conductivity cp_min = 1.0; # Minimal heat capacity lx, ly, lz = 10.0, 10.0, 10.0; # Length of computational domain in dimension x, y and z # Numerics nx, ny, nz = 128, 128, 128; # Number of gridpoints in dimensions x, y and z nt = 20000; # Number of time steps me, dims = init_global_grid(nx, ny, nz); # Initialize the implicit global grid dx = lx/(nx_g()-1); # Space step in dimension x dy = ly/(ny_g()-1); # ... in dimension y dz = lz/(nz_g()-1); # ... in dimension z # Array initializations T = zeros(nx, ny, nz ); Cp = zeros(nx, ny, nz ); dTedt = zeros(nx-2, ny-2, nz-2); qx = zeros(nx-1, ny-2, nz-2); qy = zeros(nx-2, ny-1, nz-2); qz = zeros(nx-2, ny-2, nz-1); # Initial conditions (heat capacity and temperature with two Gaussian anomalies each) Cp .= cp_min .+ [5*exp(-((x_g(ix,dx,Cp)-lx/1.5))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) + 5*exp(-((x_g(ix,dx,Cp)-lx/3.0))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)] T .= [100*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/3.0)/2)^2) + 50*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/1.5)/2)^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)] # Preparation of visualisation gr() ENV["GKSwstype"]="nul" anim = Animation(); nx_v = (nx-2)*dims[1]; ny_v = (ny-2)*dims[2]; nz_v = (nz-2)*dims[3]; T_v = zeros(nx_v, ny_v, nz_v); T_nohalo = zeros(nx-2, ny-2, nz-2); # Time loop dt = min(dx*dx,dy*dy,dz*dz)*cp_min/lam/8.1; # Time step for the 3D Heat diffusion for it = 1:nt if mod(it, 500) == 1 # Visualize only every 500th time step T_nohalo .= T[2:end-1,2:end-1,2:end-1]; # Copy data removing the halo. gather!(T_nohalo, T_v) # Gather data on process 0 (could be interpolated/sampled first) if (me==0) heatmap(transpose(T_v[:,ny_v÷2,:]), aspect_ratio=1); frame(anim); end # Visualize it on process 0. end qx .= -lam.*d_xi(T)./dx; # Fourier's law of heat conduction: q_x = -λ δT/δx qy .= -lam.*d_yi(T)./dy; # ... q_y = -λ δT/δy qz .= -lam.*d_zi(T)./dz; # ... q_z = -λ δT/δz dTedt .= 1.0./inn(Cp).*(-d_xa(qx)./dx .- d_ya(qy)./dy .- d_za(qz)./dz); # Conservation of energy: δT/δt = 1/cₚ (-δq_x/δx - δq_y/dy - δq_z/dz) T[2:end-1,2:end-1,2:end-1] .= inn(T) .+ dt.*dTedt; # Update of temperature T_new = T_old + δT/δt update_halo!(T); # Update the halo of T end # Postprocessing if (me==0) gif(anim, "diffusion3D.gif", fps = 15) end # Create a gif movie on process 0. if (me==0) mp4(anim, "diffusion3D.mp4", fps = 15) end # Create a mp4 movie on process 0. finalize_global_grid(); # Finalize the implicit global grid end diffusion3D()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
3495
using ImplicitGlobalGrid @views d_xa(A) = A[2:end , : , : ] .- A[1:end-1, : , : ]; @views d_xi(A) = A[2:end ,2:end-1,2:end-1] .- A[1:end-1,2:end-1,2:end-1]; @views d_ya(A) = A[ : ,2:end , : ] .- A[ : ,1:end-1, : ]; @views d_yi(A) = A[2:end-1,2:end ,2:end-1] .- A[2:end-1,1:end-1,2:end-1]; @views d_za(A) = A[ : , : ,2:end ] .- A[ : , : ,1:end-1]; @views d_zi(A) = A[2:end-1,2:end-1,2:end ] .- A[2:end-1,2:end-1,1:end-1]; @views inn(A) = A[2:end-1,2:end-1,2:end-1] @views function diffusion3D() # Physics lam = 1.0; # Thermal conductivity cp_min = 1.0; # Minimal heat capacity lx, ly, lz = 10.0, 10.0, 10.0; # Length of computational domain in dimension x, y and z # Numerics nx, ny, nz = 128, 128, 128; # Number of gridpoints in dimensions x, y and z nt = 10000; # Number of time steps me, dims = init_global_grid(nx, ny, nz); # Initialize the implicit global grid dx = lx/(nx_g()-1); # Space step in dimension x dy = ly/(ny_g()-1); # ... in dimension y dz = lz/(nz_g()-1); # ... in dimension z # Array initializations T = zeros(nx, ny, nz ); Cp = zeros(nx, ny, nz ); dTedt = zeros(nx-2, ny-2, nz-2); qx = zeros(nx-1, ny-2, nz-2); qy = zeros(nx-2, ny-1, nz-2); qz = zeros(nx-2, ny-2, nz-1); # Initial conditions (heat capacity and temperature with two Gaussian anomalies each) Cp .= cp_min .+ [5*exp(-((x_g(ix,dx,Cp)-lx/1.5))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) + 5*exp(-((x_g(ix,dx,Cp)-lx/3.0))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)] T .= [100*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/3.0)/2)^2) + 50*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/1.5)/2)^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)] # Time loop dt = min(dx*dx,dy*dy,dz*dz)*cp_min/lam/8.1; # Time step for the 3D Heat diffusion for it = 1:nt qx .= -lam.*d_xi(T)./dx; # Fourier's law of heat conduction: q_x = -λ δT/δx qy .= -lam.*d_yi(T)./dy; # ... q_y = -λ δT/δy qz .= -lam.*d_zi(T)./dz; # ... q_z = -λ δT/δz dTedt .= 1.0./inn(Cp).*(-d_xa(qx)./dx .- d_ya(qy)./dy .- d_za(qz)./dz); # Conservation of energy: δT/δt = 1/cₚ (-δq_x/δx - δq_y/dy - δq_z/dz) T[2:end-1,2:end-1,2:end-1] .= inn(T) .+ dt.*dTedt; # Update of temperature T_new = T_old + δT/δt update_halo!(T); # Update the halo of T end finalize_global_grid(); # Finalize the implicit global grid end diffusion3D()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
4821
using CUDA # Import CUDA before ImplicitGlobalGrid to activate its CUDA device support using ImplicitGlobalGrid, Plots @views d_xa(A) = A[2:end , : , : ] .- A[1:end-1, : , : ]; @views d_xi(A) = A[2:end ,2:end-1,2:end-1] .- A[1:end-1,2:end-1,2:end-1]; @views d_ya(A) = A[ : ,2:end , : ] .- A[ : ,1:end-1, : ]; @views d_yi(A) = A[2:end-1,2:end ,2:end-1] .- A[2:end-1,1:end-1,2:end-1]; @views d_za(A) = A[ : , : ,2:end ] .- A[ : , : ,1:end-1]; @views d_zi(A) = A[2:end-1,2:end-1,2:end ] .- A[2:end-1,2:end-1,1:end-1]; @views inn(A) = A[2:end-1,2:end-1,2:end-1] @views function diffusion3D() # Physics lam = 1.0; # Thermal conductivity cp_min = 1.0; # Minimal heat capacity lx, ly, lz = 10.0, 10.0, 10.0; # Length of computational domain in dimension x, y and z # Numerics nx, ny, nz = 256, 256, 256; # Number of gridpoints in dimensions x, y and z nt = 100000; # Number of time steps me, dims = init_global_grid(nx, ny, nz); # Initialize the implicit global grid dx = lx/(nx_g()-1); # Space step in dimension x dy = ly/(ny_g()-1); # ... in dimension y dz = lz/(nz_g()-1); # ... in dimension z # Array initializations T = CUDA.zeros(Float64, nx, ny, nz ); Cp = CUDA.zeros(Float64, nx, ny, nz ); dTedt = CUDA.zeros(Float64, nx-2, ny-2, nz-2); qx = CUDA.zeros(Float64, nx-1, ny-2, nz-2); qy = CUDA.zeros(Float64, nx-2, ny-1, nz-2); qz = CUDA.zeros(Float64, nx-2, ny-2, nz-1); # Initial conditions (heat capacity and temperature with two Gaussian anomalies each) Cp .= cp_min .+ CuArray([5*exp(-((x_g(ix,dx,Cp)-lx/1.5))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) + 5*exp(-((x_g(ix,dx,Cp)-lx/3.0))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)]) T .= CuArray([100*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/3.0)/2)^2) + 50*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/1.5)/2)^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)]) # Preparation of visualisation gr() ENV["GKSwstype"]="nul" anim = Animation(); nx_v = (nx-2)*dims[1]; ny_v = (ny-2)*dims[2]; nz_v = (nz-2)*dims[3]; T_v = zeros(nx_v, ny_v, nz_v); T_nohalo = zeros(nx-2, ny-2, nz-2); # Time loop dt = min(dx*dx,dy*dy,dz*dz)*cp_min/lam/8.1; # Time step for the 3D Heat diffusion for it = 1:nt if mod(it, 1000) == 1 # Visualize only every 1000th time step T_nohalo .= Array(T[2:end-1,2:end-1,2:end-1]); # Copy data to CPU removing the halo. gather!(T_nohalo, T_v) # Gather data on process 0 (could be interpolated/sampled first) if (me==0) heatmap(transpose(T_v[:,ny_v÷2,:]), aspect_ratio=1); frame(anim); end # Visualize it on process 0. end qx .= -lam.*d_xi(T)./dx; # Fourier's law of heat conduction: q_x = -λ δT/δx qy .= -lam.*d_yi(T)./dy; # ... q_y = -λ δT/δy qz .= -lam.*d_zi(T)./dz; # ... q_z = -λ δT/δz dTedt .= 1.0./inn(Cp).*(-d_xa(qx)./dx .- d_ya(qy)./dy .- d_za(qz)./dz); # Conservation of energy: δT/δt = 1/cₚ (-δq_x/δx - δq_y/dy - δq_z/dz) T[2:end-1,2:end-1,2:end-1] .= inn(T) .+ dt.*dTedt; # Update of temperature T_new = T_old + δT/δt update_halo!(T); # Update the halo of T end # Postprocessing if (me==0) gif(anim, "diffusion3D.gif", fps = 15) end # Create a gif movie on process 0. if (me==0) mp4(anim, "diffusion3D.mp4", fps = 15) end # Create a mp4 movie on process 0. finalize_global_grid(); # Finalize the implicit global grid end diffusion3D()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
3715
using CUDA # Import CUDA before ImplicitGlobalGrid to activate its CUDA device support using ImplicitGlobalGrid @views d_xa(A) = A[2:end , : , : ] .- A[1:end-1, : , : ]; @views d_xi(A) = A[2:end ,2:end-1,2:end-1] .- A[1:end-1,2:end-1,2:end-1]; @views d_ya(A) = A[ : ,2:end , : ] .- A[ : ,1:end-1, : ]; @views d_yi(A) = A[2:end-1,2:end ,2:end-1] .- A[2:end-1,1:end-1,2:end-1]; @views d_za(A) = A[ : , : ,2:end ] .- A[ : , : ,1:end-1]; @views d_zi(A) = A[2:end-1,2:end-1,2:end ] .- A[2:end-1,2:end-1,1:end-1]; @views inn(A) = A[2:end-1,2:end-1,2:end-1] @views function diffusion3D() # Physics lam = 1.0; # Thermal conductivity cp_min = 1.0; # Minimal heat capacity lx, ly, lz = 10.0, 10.0, 10.0; # Length of computational domain in dimension x, y and z # Numerics nx, ny, nz = 256, 256, 256; # Number of gridpoints in dimensions x, y and z nt = 100000; # Number of time steps me, dims = init_global_grid(nx, ny, nz); # Initialize the implicit global grid dx = lx/(nx_g()-1); # Space step in dimension x dy = ly/(ny_g()-1); # ... in dimension y dz = lz/(nz_g()-1); # ... in dimension z # Array initializations T = CUDA.zeros(Float64, nx, ny, nz ); Cp = CUDA.zeros(Float64, nx, ny, nz ); dTedt = CUDA.zeros(Float64, nx-2, ny-2, nz-2); qx = CUDA.zeros(Float64, nx-1, ny-2, nz-2); qy = CUDA.zeros(Float64, nx-2, ny-1, nz-2); qz = CUDA.zeros(Float64, nx-2, ny-2, nz-1); # Initial conditions (heat capacity and temperature with two Gaussian anomalies each) Cp .= cp_min .+ CuArray([5*exp(-((x_g(ix,dx,Cp)-lx/1.5))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) + 5*exp(-((x_g(ix,dx,Cp)-lx/3.0))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)]) T .= CuArray([100*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/3.0)/2)^2) + 50*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/1.5)/2)^2) for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)]) # Time loop dt = min(dx*dx,dy*dy,dz*dz)*cp_min/lam/8.1; # Time step for the 3D Heat diffusion for it = 1:nt qx .= -lam.*d_xi(T)./dx; # Fourier's law of heat conduction: q_x = -λ δT/δx qy .= -lam.*d_yi(T)./dy; # ... q_y = -λ δT/δy qz .= -lam.*d_zi(T)./dz; # ... q_z = -λ δT/δz dTedt .= 1.0./inn(Cp).*(-d_xa(qx)./dx .- d_ya(qy)./dy .- d_za(qz)./dz); # Conservation of energy: δT/δt = 1/cₚ (-δq_x/δx - δq_y/dy - δq_z/dz) T[2:end-1,2:end-1,2:end-1] .= inn(T) .+ dt.*dTedt; # Update of temperature T_new = T_old + δT/δt update_halo!(T); # Update the halo of T end finalize_global_grid(); # Finalize the implicit global grid end diffusion3D()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1793
using CUDA # Import CUDA before ImplicitGlobalGrid to activate its CUDA device support using ImplicitGlobalGrid, Plots #(...) @views function diffusion3D() # Physics #(...) # Numerics #(...) me, dims = init_global_grid(nx, ny, nz); # Initialize the implicit global grid #(...) # Array initializations #(...) # Initial conditions (heat capacity and temperature with two Gaussian anomalies each) #(...) # Preparation of visualisation gr() ENV["GKSwstype"]="nul" anim = Animation(); nx_v = (nx-2)*dims[1]; ny_v = (ny-2)*dims[2]; nz_v = (nz-2)*dims[3]; T_v = zeros(nx_v, ny_v, nz_v); T_nohalo = zeros(nx-2, ny-2, nz-2); # Time loop #(...) for it = 1:nt if mod(it, 1000) == 1 # Visualize only every 1000th time step T_nohalo .= T[2:end-1,2:end-1,2:end-1]; # Copy data to CPU removing the halo. gather!(T_nohalo, T_v) # Gather data on process 0 (could be interpolated/sampled first) if (me==0) heatmap(transpose(T_v[:,ny_v÷2,:]), aspect_ratio=1); frame(anim); end # Visualize it on process 0. end #(...) end # Postprocessing if (me==0) gif(anim, "diffusion3D.gif", fps = 15) end # Create a gif movie on process 0. if (me==0) mp4(anim, "diffusion3D.mp4", fps = 15) end # Create a mp4 movie on process 0. finalize_global_grid(); # Finalize the implicit global grid end diffusion3D()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
264
module ImplicitGlobalGrid_AMDGPUExt include(joinpath(@__DIR__, "..", "src", "AMDGPUExt", "shared.jl")) include(joinpath(@__DIR__, "..", "src", "AMDGPUExt", "select_device.jl")) include(joinpath(@__DIR__, "..", "src", "AMDGPUExt", "update_halo.jl")) end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
256
module ImplicitGlobalGrid_CUDAExt include(joinpath(@__DIR__, "..", "src", "CUDAExt", "shared.jl")) include(joinpath(@__DIR__, "..", "src", "CUDAExt", "select_device.jl")) include(joinpath(@__DIR__, "..", "src", "CUDAExt", "update_halo.jl")) end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
127
module ImplicitGlobalGrid_PolyesterExt include(joinpath(@__DIR__, "..", "src", "PolyesterExt", "memcopy_polyester.jl")) end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
2062
module Exceptions export @ModuleInternalError, @IncoherentCallError, @NotInitializedError, @NotLoadedError, @IncoherentArgumentError, @KeywordArgumentError, @ArgumentEvaluationError, @ArgumentError export ModuleInternalError, IncoherentCallError, NotInitializedError, NotLoadedError, IncoherentArgumentError, KeywordArgumentError, ArgumentEvaluationError macro ModuleInternalError(msg) esc(:(throw(ModuleInternalError($msg)))) end macro IncoherentCallError(msg) esc(:(throw(IncoherentCallError($msg)))) end macro NotInitializedError(msg) esc(:(throw(NotInitializedError($msg)))) end macro NotLoadedError(msg) esc(:(throw(NotLoadedError($msg)))) end macro IncoherentArgumentError(msg) esc(:(throw(IncoherentArgumentError($msg)))) end macro KeywordArgumentError(msg) esc(:(throw(KeywordArgumentError($msg)))) end macro ArgumentEvaluationError(msg) esc(:(throw(ArgumentEvaluationError($msg)))) end macro ArgumentError(msg) esc(:(throw(ArgumentError($msg)))) end struct ModuleInternalError <: Exception msg::String end Base.showerror(io::IO, e::ModuleInternalError) = print(io, "ModuleInternalError: ", e.msg) struct IncoherentCallError <: Exception msg::String end Base.showerror(io::IO, e::IncoherentCallError) = print(io, "IncoherentCallError: ", e.msg) struct NotInitializedError <: Exception msg::String end Base.showerror(io::IO, e::NotInitializedError) = print(io, "NotInitializedError: ", e.msg) struct NotLoadedError <: Exception msg::String end Base.showerror(io::IO, e::NotLoadedError) = print(io, "NotLoadedError: ", e.msg) struct IncoherentArgumentError <: Exception msg::String end Base.showerror(io::IO, e::IncoherentArgumentError) = print(io, "IncoherentArgumentError: ", e.msg) struct KeywordArgumentError <: Exception msg::String end Base.showerror(io::IO, e::KeywordArgumentError) = print(io, "KeywordArgumentError: ", e.msg) struct ArgumentEvaluationError <: Exception msg::String end Base.showerror(io::IO, e::ArgumentEvaluationError) = print(io, "ArgumentEvaluationError: ", e.msg) end # Module Exceptions
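A hedged sketch of how these macros would be used by a caller (the function below is hypothetical; inside the package the module is brought in with `using .Exceptions`, as in the top-level file further down):

```julia
# Hypothetical caller: @ArgumentError expands to `throw(ArgumentError(msg))`,
# matching the macro definitions above.
using .Exceptions

function require_positive(n)
    n > 0 || @ArgumentError("expected a positive value, got $n.")
    return n
end
```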
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
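These macros just wrap `throw` around the matching exception type; a hedged sketch of their behavior (Exceptions is an internal helper module, so this is illustrative rather than public API, and the error message is invented):

```julia
using ImplicitGlobalGrid
using ImplicitGlobalGrid.Exceptions   # brings the @...Error macros and exception types into scope

try
    @IncoherentArgumentError("nx, ny or nz is inconsistent with dims")   # illustrative message
catch err
    @show err isa IncoherentArgumentError          # true
    showerror(stdout, err); println()              # prints "IncoherentArgumentError: ..."
end
```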
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1927
""" Module ImplicitGlobalGrid Renders the distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid almost trivial and enables close to ideal weak scaling of real-world applications on thousands of GPUs. # General overview and examples https://github.com/eth-cscs/ImplicitGlobalGrid.jl # Functions - [`init_global_grid`](@ref) - [`finalize_global_grid`](@ref) - [`update_halo!`](@ref) - [`gather!`](@ref) - [`select_device`](@ref) - [`nx_g`](@ref) - [`ny_g`](@ref) - [`nz_g`](@ref) - [`x_g`](@ref) - [`y_g`](@ref) - [`z_g`](@ref) - [`tic`](@ref) - [`toc`](@ref) To see a description of a function type `?<functionname>`. !!! note "Activation of device support" The support for a device type (CUDA or AMDGPU) is activated by importing the corresponding module (CUDA or AMDGPU) before importing ImplicitGlobalGrid (the corresponding extension will be loaded). !!! note "Performance note" If the system supports CUDA-aware MPI (for Nvidia GPUs) or ROCm-aware MPI (for AMD GPUs), it may be activated for ImplicitGlobalGrid by setting one of the following environment variables (at latest before the call to `init_global_grid`): ```shell shell> export IGG_CUDAAWARE_MPI=1 ``` ```shell shell> export IGG_ROCMAWARE_MPI=1 ``` """ module ImplicitGlobalGrid ## Include of exception module include("Exceptions.jl"); using .Exceptions ## Include of shared constant parameters, types and syntax sugar include("shared.jl") ## Alphabetical include of defaults for extensions include("defaults_shared.jl") include(joinpath("AMDGPUExt", "defaults.jl")) include(joinpath("CUDAExt", "defaults.jl")) include(joinpath("PolyesterExt", "memcopy_polyester_default.jl")) ## Alphabetical include of files include("finalize_global_grid.jl") include("gather.jl") include("init_global_grid.jl") include("select_device.jl") include("tools.jl") include("update_halo.jl") end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
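A minimal end-to-end sketch of the workflow this module documents; the grid size and the process-dependent fill values are illustrative only:

```julia
using ImplicitGlobalGrid

nx = ny = nz = 16                                           # illustrative local grid size
me, dims, nprocs, coords, comm_cart = init_global_grid(nx, ny, nz)

A = zeros(nx, ny, nz)
A[2:end-1, 2:end-1, 2:end-1] .= me                          # some process-dependent interior values

update_halo!(A)                                             # exchange boundary values with the neighbors

if me == 0
    println("global grid: $(nx_g()) x $(ny_g()) x $(nz_g()) on $nprocs process(es)")
end

finalize_global_grid()
```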
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
460
# shared.jl
is_loaded(arg) = false #TODO: this would not work as it should be the caller module...: (Base.get_extension(@__MODULE__, ext) !== nothing)
is_functional(arg) = false
function register end

# update_halo.jl
function gpusendbuf end
function gpurecvbuf end
function gpusendbuf_flat end
function gpurecvbuf_flat end
function write_d2x! end
function read_x2d! end
function write_d2h_async! end
function read_h2d_async! end
function gpumemcopy! end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
907
export finalize_global_grid

"""
    finalize_global_grid()
    finalize_global_grid(;finalize_MPI=true)

Finalize the global grid (and also MPI by default).

# Arguments
!!! note "Advanced keyword arguments"
    - `finalize_MPI::Bool=true`: whether to finalize MPI (`true`) or not (`false`).

See also: [`init_global_grid`](@ref)
"""
function finalize_global_grid(;finalize_MPI::Bool=true)
    check_initialized();
    free_update_halo_buffers();
    if (finalize_MPI)
        if (!MPI.Initialized()) error("MPI cannot be finalized as it has not been initialized. "); end # This case should never occur as init_global_grid() must enforce that after a call to it, MPI is always initialized.
        if (MPI.Finalized()) error("MPI is already finalized. Set the argument 'finalize_MPI=false'."); end
        MPI.Finalize();
    end
    set_global_grid(GLOBAL_GRID_NULL);
    GC.gc();
    return nothing
end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
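When the application manages MPI itself, the `finalize_MPI` keyword documented above can be combined with `init_MPI=false`; a hedged sketch with an arbitrary grid size:

```julia
using MPI, ImplicitGlobalGrid

MPI.Init()                                              # the application owns the MPI lifecycle
me, dims = init_global_grid(8, 8, 8; init_MPI=false)    # illustrative size
# ... computation and update_halo! calls ...
finalize_global_grid(finalize_MPI=false)                # keep MPI alive for the rest of the application
MPI.Finalize()
```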
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
2904
export gather!

"""
    gather!(A, A_global)
    gather!(A, A_global; root=0)

!!! note "Advanced"
        gather!(A, A_global, comm; root=0)

Gather an array `A` from each member of the Cartesian grid of MPI processes into one large array `A_global` on the root process (default: `0`). The size of the global array `size(A_global)` must be equal to the product of `size(A)` and `dims`, where `dims` is the number of processes in each dimension of the Cartesian grid, defined in [`init_global_grid`](@ref).

!!! note "Advanced"
    If the argument `comm` is given, then this communicator is used for the gather operation and `dims` extracted from it.

!!! note "Memory requirements"
    The memory for the global array only needs to be allocated on the root process; the argument `A_global` can be `nothing` on the other processes.
"""
function gather!(A::AbstractArray{T}, A_global::Union{AbstractArray{T,N},Nothing}; root::Integer=0) where {T,N}
    check_initialized();
    gather!(A, A_global, comm(); root=root);
    return nothing
end

function gather!(A::AbstractArray{T,N2}, A_global::Union{AbstractArray{T,N},Nothing}, comm::MPI.Comm; root::Integer=0) where {T,N,N2}
    if MPI.Comm_rank(comm) == root
        if (A_global === nothing) error("The input argument `A_global` can't be `nothing` on the root.") end
        if (N2 > N) error("The number of dimension of `A` must be less than or equal to the number of dimensions of `A_global`.") end
        dims, _, _ = MPI.Cart_get(comm)
        if (N > length(dims)) error("The number of dimensions of `A_global` must be less than or equal to the number of dimensions of the Cartesian grid of MPI processes.") end
        dims = Tuple(dims[1:N])
        size_A = (size(A)..., (1 for _ in N2+1:N)...)
        if (size(A_global) != (dims .* size_A)) error("The size of the global array `size(A_global)` must be equal to the product of `size(A)` and `dims`.") end
        # Make subtype for gather
        offset  = Tuple(0 for _ in 1:N)
        subtype = MPI.Types.create_subarray(size(A_global), size_A, offset, MPI.Datatype(eltype(A_global)))
        subtype = MPI.Types.create_resized(subtype, 0, size(A, 1) * Base.elsize(A_global))
        MPI.Types.commit!(subtype)
        # Make VBuffer for collective communication
        counts  = fill(Cint(1), reverse(dims))    # Gather one subarray from each MPI rank
        displs  = zeros(Cint, reverse(dims))      # Reverse dims since MPI Cart comm is row-major
        csizes  = cumprod(size_A[2:end] .* dims[1:end-1])
        strides = (1, csizes...)
        for I in CartesianIndices(displs)
            offset = reverse(Tuple(I - oneunit(I)))
            displs[I] = sum(offset .* strides)
        end
        recvbuf = MPI.VBuffer(A_global, vec(counts), vec(displs), subtype)
        MPI.Gatherv!(A, recvbuf, comm; root)
    else
        MPI.Gatherv!(A, nothing, comm; root)
    end
    return
end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
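A hedged sketch of the size contract described in the docstring, with `A_global` allocated only on the root (sizes and fill values are arbitrary):

```julia
using ImplicitGlobalGrid

nx, ny, nz = 4, 4, 4                                          # illustrative local size
me, dims = init_global_grid(nx, ny, nz)

A        = fill(Float64(me), nx, ny, nz)                      # each rank contributes its own rank id
A_global = me == 0 ? zeros(nx*dims[1], ny*dims[2], nz*dims[3]) : nothing   # allocated on root only

gather!(A, A_global)                                          # rank 0 now holds the assembled array
if me == 0  @show size(A_global)  end

finalize_global_grid()
```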
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
11935
export init_global_grid """ init_global_grid(nx, ny, nz) me, dims, nprocs, coords, comm_cart = init_global_grid(nx, ny, nz; <keyword arguments>) Initialize a Cartesian grid of MPI processes (and also MPI itself by default) defining implicitely a global grid. # Arguments - {`nx`|`ny`|`nz`}`::Integer`: the number of elements of the local grid in dimension {x|y|z}. - {`dimx`|`dimy`|`dimz`}`::Integer=0`: the desired number of processes in dimension {x|y|z}. By default, (value `0`) the process topology is created as compact as possible with the given constraints. This is handled by the MPI implementation which is installed on your system. For more information, refer to the specifications of `MPI_Dims_create` in the corresponding documentation. - {`periodx`|`periody`|`periodz`}`::Integer=0`: whether the grid is periodic (`1`) or not (`0`) in dimension {x|y|z}. - `quiet::Bool=false`: whether to suppress printing information like the size of the global grid (`true`) or not (`false`). !!! note "Advanced keyword arguments" - `overlaps::Tuple{Int,Int,Int}=(2,2,2)`: the number of elements adjacent local grids overlap in dimension x, y and z. By default (value `(2,2,2)`), an array `A` of size (`nx`, `ny`, `nz`) on process 1 (`A_1`) overlaps the corresponding array `A` on process 2 (`A_2`) by `2` indices if the two processes are adjacent. E.g., if `overlaps[1]=2` and process 2 is the right neighbor of process 1 in dimension x, then `A_1[end-1:end,:,:]` overlaps `A_2[1:2,:,:]`. That means, after every call `update_halo!(A)`, we have `all(A_1[end-1:end,:,:] .== A_2[1:2,:,:])` (`A_1[end,:,:]` is the halo of process 1 and `A_2[1,:,:]` is the halo of process 2). The analog applies for the dimensions y and z. - `halowidths::Tuple{Int,Int,Int}=max.(1,overlaps.÷2)`: the default width of an array's halo in dimension x, y and z (must be greater than 1). The default can be overwritten per array in the function [`update_halo`](@ref). - `disp::Integer=1`: the displacement argument to `MPI.Cart_shift` in order to determine the neighbors. - `reorder::Integer=1`: the reorder argument to `MPI.Cart_create` in order to create the Cartesian process topology. - `comm::MPI.Comm=MPI.COMM_WORLD`: the input communicator argument to `MPI.Cart_create` in order to create the Cartesian process topology. - `init_MPI::Bool=true`: whether to initialize MPI (`true`) or not (`false`). - `device_type::String="auto"`: the type of the device to be used if available: `"CUDA"`, `"AMDGPU"`, `"none"` or `"auto"`. Set `device_type="none"` if you want to use only CPUs on a system having also GPUs. If `device_type` is `"auto"` (default), it is automatically determined, depending on which of the modules used for programming the devices (CUDA.jl or AMDGPU.jl) was imported before ImplicitGlobalGrid; if both were imported, an error will be given if `device_type` is set as `"auto"`. - `select_device::Bool=true`: whether to automatically select the device (GPU) (`true`) or not (`false`) if CUDA or AMDGPU was imported and `device_type` is not `"none"`. If `true`, it selects the device corresponding to the node-local MPI rank. This method of device selection suits both single and multi-device compute nodes and is recommended in general. It is also the default method of device selection of the *function* [`select_device`](@ref). For more information, refer to the documentation of MPI.jl / MPI. # Return values - `me`: the MPI rank of the process. - `dims`: the number of processes in each dimension. - `nprocs`: the number of processes. 
- `coords`: the Cartesian coordinates of the process. - `comm_cart`: the MPI communicator of the created Cartesian process topology. # Typical use cases init_global_grid(nx, ny, nz) # Basic call (no optional in and output arguments). me, = init_global_grid(nx, ny, nz) # Capture 'me' (note the ','!). me, dims = init_global_grid(nx, ny, nz) # Capture 'me' and 'dims'. init_global_grid(nx, ny, nz; dimx=2, dimy=2) # Fix the number of processes in the dimensions x and y of the Cartesian grid of MPI processes to 2 (the number of processes can vary only in the dimension z). init_global_grid(nx, ny, nz; periodz=1) # Make the boundaries in dimension z periodic. See also: [`finalize_global_grid`](@ref), [`select_device`](@ref) """ function init_global_grid(nx::Integer, ny::Integer, nz::Integer; dimx::Integer=0, dimy::Integer=0, dimz::Integer=0, periodx::Integer=0, periody::Integer=0, periodz::Integer=0, overlaps::Tuple{Int,Int,Int}=(2,2,2), halowidths::Tuple{Int,Int,Int}=max.(1,overlaps.÷2), disp::Integer=1, reorder::Integer=1, comm::MPI.Comm=MPI.COMM_WORLD, init_MPI::Bool=true, device_type::String=DEVICE_TYPE_AUTO, select_device::Bool=true, quiet::Bool=false) if grid_is_initialized() error("The global grid has already been initialized.") end set_cuda_loaded() set_cuda_functional() set_amdgpu_loaded() set_amdgpu_functional() nxyz = [nx, ny, nz]; dims = [dimx, dimy, dimz]; periods = [periodx, periody, periodz]; overlaps = [overlaps...]; halowidths = [halowidths...]; cuda_enabled = false amdgpu_enabled = false cudaaware_MPI = [false, false, false] amdgpuaware_MPI = [false, false, false] use_polyester = [false, false, false] if haskey(ENV, "IGG_LOOPVECTORIZATION") error("Environment variable IGG_LOOPVECTORIZATION is not supported anymore. Use IGG_USE_POLYESTER instead.") end if haskey(ENV, "IGG_CUDAAWARE_MPI") cudaaware_MPI .= (parse(Int64, ENV["IGG_CUDAAWARE_MPI"]) > 0); end if haskey(ENV, "IGG_ROCMAWARE_MPI") amdgpuaware_MPI .= (parse(Int64, ENV["IGG_ROCMAWARE_MPI"]) > 0); end if haskey(ENV, "IGG_USE_POLYESTER") use_polyester .= (parse(Int64, ENV["IGG_USE_POLYESTER"]) > 0); end if none(cudaaware_MPI) if haskey(ENV, "IGG_CUDAAWARE_MPI_DIMX") cudaaware_MPI[1] = (parse(Int64, ENV["IGG_CUDAAWARE_MPI_DIMX"]) > 0); end if haskey(ENV, "IGG_CUDAAWARE_MPI_DIMY") cudaaware_MPI[2] = (parse(Int64, ENV["IGG_CUDAAWARE_MPI_DIMY"]) > 0); end if haskey(ENV, "IGG_CUDAAWARE_MPI_DIMZ") cudaaware_MPI[3] = (parse(Int64, ENV["IGG_CUDAAWARE_MPI_DIMZ"]) > 0); end end if none(amdgpuaware_MPI) if haskey(ENV, "IGG_ROCMAWARE_MPI_DIMX") amdgpuaware_MPI[1] = (parse(Int64, ENV["IGG_ROCMAWARE_MPI_DIMX"]) > 0); end if haskey(ENV, "IGG_ROCMAWARE_MPI_DIMY") amdgpuaware_MPI[2] = (parse(Int64, ENV["IGG_ROCMAWARE_MPI_DIMY"]) > 0); end if haskey(ENV, "IGG_ROCMAWARE_MPI_DIMZ") amdgpuaware_MPI[3] = (parse(Int64, ENV["IGG_ROCMAWARE_MPI_DIMZ"]) > 0); end end if all(use_polyester) if haskey(ENV, "IGG_USE_POLYESTER_DIMX") use_polyester[1] = (parse(Int64, ENV["IGG_USE_POLYESTER_DIMX"]) > 0); end if haskey(ENV, "IGG_USE_POLYESTER_DIMY") use_polyester[2] = (parse(Int64, ENV["IGG_USE_POLYESTER_DIMY"]) > 0); end if haskey(ENV, "IGG_USE_POLYESTER_DIMZ") use_polyester[3] = (parse(Int64, ENV["IGG_USE_POLYESTER_DIMZ"]) > 0); end end if !(device_type in [DEVICE_TYPE_NONE, DEVICE_TYPE_AUTO, DEVICE_TYPE_CUDA, DEVICE_TYPE_AMDGPU]) error("Argument `device_type`: invalid value obtained ($device_type). 
Valid values are: $DEVICE_TYPE_CUDA, $DEVICE_TYPE_AMDGPU, $DEVICE_TYPE_NONE, $DEVICE_TYPE_AUTO") end if ((device_type == DEVICE_TYPE_AUTO) && cuda_loaded() && cuda_functional() && amdgpu_loaded() && amdgpu_functional()) error("Automatic detection of the device type to be used not possible: both CUDA and AMDGPU extensions are loaded and functional. Set keyword argument `device_type` to $DEVICE_TYPE_CUDA or $DEVICE_TYPE_AMDGPU.") end if (device_type != DEVICE_TYPE_NONE) if (device_type in [DEVICE_TYPE_CUDA, DEVICE_TYPE_AUTO]) cuda_enabled = cuda_loaded() && cuda_functional() end # NOTE: cuda could be enabled/disabled depending on some additional criteria. if (device_type in [DEVICE_TYPE_AMDGPU, DEVICE_TYPE_AUTO]) amdgpu_enabled = amdgpu_loaded() && amdgpu_functional() end # NOTE: amdgpu could be enabled/disabled depending on some additional criteria. end if (any(nxyz .< 1)) error("Invalid arguments: nx, ny, and nz cannot be less than 1."); end if (any(dims .< 0)) error("Invalid arguments: dimx, dimy, and dimz cannot be negative."); end if (any(periods .∉ ((0,1),))) error("Invalid arguments: periodx, periody, and periodz must be either 0 or 1."); end if (any(halowidths .< 1)) error("Invalid arguments: halowidths cannot be less than 1."); end if (nx==1) error("Invalid arguments: nx can never be 1.") end if (ny==1 && nz>1) error("Invalid arguments: ny cannot be 1 if nz is greater than 1.") end if (any((nxyz .== 1) .& (dims .>1 ))) error("Incoherent arguments: if nx, ny, or nz is 1, then the corresponding dimx, dimy or dimz must not be set (or set 0 or 1)."); end if (any((nxyz .< 2 .* overlaps .- 1) .& (periods .> 0))) error("Incoherent arguments: if nx, ny, or nz is smaller than 2*overlaps[1]-1, 2*overlaps[2]-1 or 2*overlaps[3]-1, respectively, then the corresponding periodx, periody or periodz must not be set (or set 0)."); end if (any((overlaps .> 0) .& (halowidths .> overlaps.÷2))) error("Incoherent arguments: if overlap is greater than 0, then halowidth cannot be greater than overlap÷2, in each dimension."); end dims[(nxyz.==1).&(dims.==0)] .= 1; # Setting any of nxyz to 1, means that the corresponding dimension must also be 1 in the global grid. Thus, the corresponding dims entry must be 1. if (init_MPI) # NOTE: init MPI only, once the input arguments have been checked. if (MPI.Initialized()) error("MPI is already initialized. Set the argument 'init_MPI=false'."); end MPI.Init(); else if (!MPI.Initialized()) error("MPI has not been initialized beforehand. Remove the argument 'init_MPI=false'."); end # Ensure that MPI is always initialized after init_global_grid(). end nprocs = MPI.Comm_size(comm); MPI.Dims_create!(nprocs, dims); comm_cart = MPI.Cart_create(comm, dims, periods, reorder); me = MPI.Comm_rank(comm_cart); coords = MPI.Cart_coords(comm_cart); neighbors = fill(MPI.PROC_NULL, NNEIGHBORS_PER_DIM, NDIMS_MPI); for i = 1:NDIMS_MPI neighbors[:,i] .= MPI.Cart_shift(comm_cart, i-1, disp); end nxyz_g = dims.*(nxyz.-overlaps) .+ overlaps.*(periods.==0); # E.g. for dimension x with ol=2 and periodx=0: dimx*(nx-2)+2 set_global_grid(GlobalGrid(nxyz_g, nxyz, dims, overlaps, halowidths, nprocs, me, coords, neighbors, periods, disp, reorder, comm_cart, cuda_enabled, amdgpu_enabled, cudaaware_MPI, amdgpuaware_MPI, use_polyester, quiet)); cuda_support_string = (cuda_enabled && all(cudaaware_MPI)) ? "CUDA-aware" : (cuda_enabled && any(cudaaware_MPI)) ? "CUDA(-aware)" : (cuda_enabled) ? "CUDA" : ""; amdgpu_support_string = (amdgpu_enabled && all(amdgpuaware_MPI)) ? 
"AMDGPU-aware" : (amdgpu_enabled && any(amdgpuaware_MPI)) ? "AMDGPU(-aware)" : (amdgpu_enabled) ? "AMDGPU" : ""; gpu_support_string = join(filter(!isempty, [cuda_support_string, amdgpu_support_string]), ", "); support_string = isempty(gpu_support_string) ? "none" : gpu_support_string; if (!quiet && me==0) println("Global grid: $(nxyz_g[1])x$(nxyz_g[2])x$(nxyz_g[3]) (nprocs: $nprocs, dims: $(dims[1])x$(dims[2])x$(dims[3]); device support: $support_string)"); end if ((cuda_enabled || amdgpu_enabled) && select_device) _select_device() end init_timing_functions(); return me, dims, nprocs, coords, comm_cart; # The typical use case requires only these variables; the remaining can be obtained calling get_global_grid() if needed. end # Make sure that timing functions which must be fast at the first user call are already compiled now. function init_timing_functions() tic(); toc(); end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1545
export select_device

"""
    select_device()

Select the device (GPU) corresponding to the node-local MPI rank and return its ID.

!!! note "device indexing"
    - CUDA.jl device indexing is 0-based.
    - AMDGPU.jl device indexing is 1-based.
    - The returned ID is therefore 0-based for CUDA and 1-based for AMDGPU.

See also: [`init_global_grid`](@ref)
"""
function select_device()
    check_initialized()
    if (cuda_enabled() && amdgpu_enabled()) error("Cannot select a device because both CUDA and AMDGPU are enabled (meaning that both modules were imported before ImplicitGlobalGrid).") end
    if cuda_enabled() || amdgpu_enabled()
        if cuda_enabled()
            @assert cuda_functional()
            nb_devices = nb_cudevices()
        elseif amdgpu_enabled()
            @assert amdgpu_functional()
            nb_devices = nb_rocdevices()
        end
        comm_l = MPI.Comm_split_type(comm(), MPI.COMM_TYPE_SHARED, me())
        if (MPI.Comm_size(comm_l) > nb_devices) error("More processes have been launched per node than there are GPUs available."); end
        me_l      = MPI.Comm_rank(comm_l)
        device_id = amdgpu_enabled() ? me_l+1 : me_l
        if cuda_enabled()
            cudevice!(device_id)
        elseif amdgpu_enabled()
            rocdevice!(device_id)
        end
        return device_id
    else
        error("Cannot select a device because neither CUDA nor AMDGPU is enabled (meaning that the corresponding module was not imported before ImplicitGlobalGrid).")
    end
end

_select_device() = select_device()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
8032
import MPI using Base.Threads ##------------------------------------ ## HANDLING OF CUDA AND AMDGPU SUPPORT let global cuda_loaded, cuda_functional, amdgpu_loaded, amdgpu_functional, set_cuda_loaded, set_cuda_functional, set_amdgpu_loaded, set_amdgpu_functional _cuda_loaded::Bool = false _cuda_functional::Bool = false _amdgpu_loaded::Bool = false _amdgpu_functional::Bool = false cuda_loaded()::Bool = _cuda_loaded cuda_functional()::Bool = _cuda_functional amdgpu_loaded()::Bool = _amdgpu_loaded amdgpu_functional()::Bool = _amdgpu_functional set_cuda_loaded() = (_cuda_loaded = is_loaded(Val(:ImplicitGlobalGrid_CUDAExt))) set_cuda_functional() = (_cuda_functional = is_functional(Val(:CUDA))) set_amdgpu_loaded() = (_amdgpu_loaded = is_loaded(Val(:ImplicitGlobalGrid_AMDGPUExt))) set_amdgpu_functional() = (_amdgpu_functional = is_functional(Val(:AMDGPU))) end ##-------------------- ## CONSTANT PARAMETERS const NDIMS_MPI = 3 # Internally, we set the number of dimensions always to 3 for calls to MPI. This ensures a fixed size for MPI coords, neigbors, etc and in general a simple, easy to read code. const NNEIGHBORS_PER_DIM = 2 # Number of neighbors per dimension (left neighbor + right neighbor). const GG_ALLOC_GRANULARITY = 32 # Internal buffers are allocated with a granulariy of GG_ALLOC_GRANULARITY elements in order to ensure correct reinterpretation when used for different types and to reduce amount of re-allocations. const GG_THREADCOPY_THRESHOLD = 32768 # When Polyester is deactivated, then the GG_THREADCOPY_THRESHOLD defines the size in bytes upon which memory copy is performed with multiple threads. const DEVICE_TYPE_NONE = "none" const DEVICE_TYPE_AUTO = "auto" const DEVICE_TYPE_CUDA = "CUDA" const DEVICE_TYPE_AMDGPU = "AMDGPU" const SUPPORTED_DEVICE_TYPES = [DEVICE_TYPE_CUDA, DEVICE_TYPE_AMDGPU] ##------ ## TYPES const GGInt = Cint const GGNumber = Number const GGArray{T,N} = DenseArray{T,N} # TODO: was Union{Array{T,N}, CuArray{T,N}, ROCArray{T,N}} const GGField{T,N,T_array} = NamedTuple{(:A, :halowidths), Tuple{T_array, Tuple{GGInt,GGInt,GGInt}}} where {T_array<:GGArray{T,N}} const GGFieldConvertible{T,N,T_array} = NamedTuple{(:A, :halowidths), <:Tuple{T_array, Tuple{T2,T2,T2}}} where {T_array<:GGArray{T,N}, T2<:Integer} const GGField{}(t::NamedTuple) = GGField{eltype(t.A),ndims(t.A),typeof(t.A)}((t.A, GGInt.(t.halowidths))) const CPUField{T,N} = GGField{T,N,Array{T,N}} "An GlobalGrid struct contains information on the grid and the corresponding MPI communicator." # Note: type GlobalGrid is immutable, i.e. users can only read, but not modify it (except the actual entries of arrays can be modified, e.g. dims .= dims - useful for writing tests) struct GlobalGrid nxyz_g::Vector{GGInt} nxyz::Vector{GGInt} dims::Vector{GGInt} overlaps::Vector{GGInt} halowidths::Vector{GGInt} nprocs::GGInt me::GGInt coords::Vector{GGInt} neighbors::Array{GGInt, NNEIGHBORS_PER_DIM} periods::Vector{GGInt} disp::GGInt reorder::GGInt comm::MPI.Comm cuda_enabled::Bool amdgpu_enabled::Bool cudaaware_MPI::Vector{Bool} amdgpuaware_MPI::Vector{Bool} use_polyester::Vector{Bool} quiet::Bool end const GLOBAL_GRID_NULL = GlobalGrid(GGInt[-1,-1,-1], GGInt[-1,-1,-1], GGInt[-1,-1,-1], GGInt[-1,-1,-1], GGInt[-1,-1,-1], -1, -1, GGInt[-1,-1,-1], GGInt[-1 -1 -1; -1 -1 -1], GGInt[-1,-1,-1], -1, -1, MPI.COMM_NULL, false, false, [false,false,false], [false,false,false], [false,false,false], false) # Macro to switch on/off check_initialized() for performance reasons (potentially relevant for tools.jl). 
macro check_initialized() :(check_initialized();) end #FIXME: Alternative: macro check_initialized() end let global global_grid, set_global_grid, grid_is_initialized, check_initialized, get_global_grid _global_grid::GlobalGrid = GLOBAL_GRID_NULL global_grid()::GlobalGrid = (@check_initialized(); _global_grid::GlobalGrid) # Thanks to the call to check_initialized, we can be sure that _global_grid is defined and therefore must be of type GlobalGrid. set_global_grid(gg::GlobalGrid) = (_global_grid = gg;) grid_is_initialized() = (_global_grid.nprocs > 0) check_initialized() = if !grid_is_initialized() error("No function of the module can be called before init_global_grid() or after finalize_global_grid().") end "Return a deep copy of the global grid." get_global_grid() = deepcopy(_global_grid) end ##------------- ## SYNTAX SUGAR macro require(condition) esc(:( if !($condition) error("Pre-test requirement not met: $condition") end )) end # Verify a condition required for a unit test (in the unit test results, this should not be treated as a unit test). longnameof(f) = "$(parentmodule(f)).$(nameof(f))" isnothing(x::Any) = x === nothing ? true : false # To ensure compatibility for Julia >=v1 none(x::AbstractArray{Bool}) = all(x.==false) me() = global_grid().me comm() = global_grid().comm ol(dim::Integer) = global_grid().overlaps[dim] ol(dim::Integer, A::GGArray) = global_grid().overlaps[dim] + (size(A,dim) - global_grid().nxyz[dim]) ol(A::GGArray) = (ol(dim, A) for dim in 1:ndims(A)) hw_default() = global_grid().halowidths neighbors(dim::Integer) = global_grid().neighbors[:,dim] neighbor(n::Integer, dim::Integer) = global_grid().neighbors[n,dim] cuda_enabled() = global_grid().cuda_enabled amdgpu_enabled() = global_grid().amdgpu_enabled cudaaware_MPI() = global_grid().cudaaware_MPI cudaaware_MPI(dim::Integer) = global_grid().cudaaware_MPI[dim] amdgpuaware_MPI() = global_grid().amdgpuaware_MPI amdgpuaware_MPI(dim::Integer) = global_grid().amdgpuaware_MPI[dim] use_polyester() = global_grid().use_polyester use_polyester(dim::Integer) = global_grid().use_polyester[dim] has_neighbor(n::Integer, dim::Integer) = neighbor(n, dim) != MPI.PROC_NULL any_array(fields::GGField...) = any([is_array(A.A) for A in fields]) any_cuarray(fields::GGField...) = any([is_cuarray(A.A) for A in fields]) any_rocarray(fields::GGField...) = any([is_rocarray(A.A) for A in fields]) all_arrays(fields::GGField...) = all([is_array(A.A) for A in fields]) all_cuarrays(fields::GGField...) = all([is_cuarray(A.A) for A in fields]) all_rocarrays(fields::GGField...) = all([is_rocarray(A.A) for A in fields]) is_array(A::GGArray) = typeof(A) <: Array ##-------------------------------------------------------------------------------- ## FUNCTIONS FOR WRAPPING ARRAYS AND FIELDS AND DEFINE ARRAY PROPERTY BASE METHODS wrap_field(A::GGField) = A wrap_field(A::GGFieldConvertible) = GGField(A) wrap_field(A::Array, hw::Tuple) = CPUField{eltype(A), ndims(A)}((A, hw)) wrap_field(A::GGArray, hw::Integer...) = wrap_field(A, hw) wrap_field(A::GGArray) = wrap_field(A, hw_default()...) Base.size(A::Union{GGField, CPUField}) = Base.size(A.A) Base.size(A::Union{GGField, CPUField}, args...) = Base.size(A.A, args...) 
Base.length(A::Union{GGField, CPUField}) = Base.length(A.A) Base.ndims(A::Union{GGField, CPUField}) = Base.ndims(A.A) Base.eltype(A::Union{GGField, CPUField}) = Base.eltype(A.A) ##------------------------------------------ ## CUDA AND AMDGPU COMMON EXTENSION DEFAULTS # TODO: this should not be required as only called from the extensions #function register end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
5467
export nx_g, ny_g, nz_g, x_g, y_g, z_g, tic, toc macro nx_g() esc(:( global_grid().nxyz_g[1] )) end macro ny_g() esc(:( global_grid().nxyz_g[2] )) end macro nz_g() esc(:( global_grid().nxyz_g[3] )) end macro nx() esc(:( global_grid().nxyz[1] )) end macro ny() esc(:( global_grid().nxyz[2] )) end macro nz() esc(:( global_grid().nxyz[3] )) end macro coordx() esc(:( global_grid().coords[1] )) end macro coordy() esc(:( global_grid().coords[2] )) end macro coordz() esc(:( global_grid().coords[3] )) end macro olx() esc(:( global_grid().overlaps[1] )) end macro oly() esc(:( global_grid().overlaps[2] )) end macro olz() esc(:( global_grid().overlaps[3] )) end macro periodx() esc(:( convert(Bool, global_grid().periods[1]) )) end macro periody() esc(:( convert(Bool, global_grid().periods[2]) )) end macro periodz() esc(:( convert(Bool, global_grid().periods[3]) )) end """ nx_g() Return the size of the global grid in dimension x. """ nx_g()::GGInt = @nx_g() """ ny_g() Return the size of the global grid in dimension y. """ ny_g()::GGInt = @ny_g() """ nz_g() Return the size of the global grid in dimension z. """ nz_g()::GGInt = @nz_g() """ nx_g(A) Return the size of array `A` in the global grid in dimension x. """ nx_g(A::GGArray)::GGInt = @nx_g() + (size(A,1)-@nx()) """ ny_g(A) Return the size of array `A` in the global grid in dimension y. """ ny_g(A::GGArray)::GGInt = @ny_g() + (size(A,2)-@ny()) """ nz_g(A) Return the size of array `A` in the global grid in dimension z. """ nz_g(A::GGArray)::GGInt = @nz_g() + (size(A,3)-@nz()) """ x_g(ix, dx, A) Return the global x-coordinate for the element `ix` in the local array `A` (`dx` is the space step between the elements). # Examples ```jldoctest julia> using ImplicitGlobalGrid julia> lx=4; nx=3; ny=3; nz=3; julia> init_global_grid(nx, ny, nz); Global grid: 3x3x3 (nprocs: 1, dims: 1x1x1) julia> dx = lx/(nx_g()-1) 2.0 julia> A = zeros(nx,ny,nz); julia> Vx = zeros(nx+1,ny,nz); julia> [x_g(ix, dx, A) for ix=1:size(A, 1)] 3-element Vector{Float64}: 0.0 2.0 4.0 julia> [x_g(ix, dx, Vx) for ix=1:size(Vx, 1)] 4-element Vector{Float64}: -1.0 1.0 3.0 5.0 julia> finalize_global_grid() ``` """ function x_g(ix::Integer, dx::GGNumber, A::GGArray)::GGNumber x0 = 0.5*(@nx()-size(A,1))*dx; x = (@coordx()*(@nx()-@olx()) + ix-1)*dx + x0; if @periodx() x = x - dx; # The first cell of the global problem is a ghost cell; so, all must be shifted by dx to the left. if (x > (@nx_g()-1)*dx) x = x - @nx_g()*dx; end # It must not be (nx_g()-1)*dx as the distance between the local problems (1*dx) must also be taken into account! if (x < 0) x = x + @nx_g()*dx; end # ... end return x end """ y_g(iy, dy, A) Return the global y-coordinate for the element `iy` in the local array `A` (`dy` is the space step between the elements). 
# Examples ```jldoctest julia> using ImplicitGlobalGrid julia> ly=4; nx=3; ny=3; nz=3; julia> init_global_grid(nx, ny, nz); Global grid: 3x3x3 (nprocs: 1, dims: 1x1x1) julia> dy = ly/(ny_g()-1) 2.0 julia> A = zeros(nx,ny,nz); julia> Vy = zeros(nx,ny+1,nz); julia> [y_g(iy, dy, A) for iy=1:size(A, 1)] 3-element Vector{Float64}: 0.0 2.0 4.0 julia> [y_g(iy, dy, Vy) for iy=1:size(Vy, 2)] 4-element Vector{Float64}: -1.0 1.0 3.0 5.0 julia> finalize_global_grid() ``` """ function y_g(iy::Integer, dy::GGNumber, A::GGArray)::GGNumber y0 = 0.5*(@ny()-size(A,2))*dy; y = (@coordy()*(@ny()-@oly()) + iy-1)*dy + y0; if @periody() y = y - dy; if (y > (@ny_g()-1)*dy) y = y - @ny_g()*dy; end if (y < 0) y = y + @ny_g()*dy; end end return y end """ z_g(iz, dz, A) Return the global z-coordinate for the element `iz` in the local array `A` (`dz` is the space step between the elements). # Examples ```jldoctest julia> using ImplicitGlobalGrid julia> lz=4; nx=3; ny=3; nz=3; julia> init_global_grid(nx, ny, nz); Global grid: 3x3x3 (nprocs: 1, dims: 1x1x1) julia> dz = lz/(nz_g()-1) 2.0 julia> A = zeros(nx,ny,nz); julia> Vz = zeros(nx,ny,nz+1); julia> [z_g(iz, dz, A) for iz=1:size(A, 1)] 3-element Vector{Float64}: 0.0 2.0 4.0 julia> [z_g(iz, dz, Vz) for iz=1:size(Vz, 3)] 4-element Vector{Float64}: -1.0 1.0 3.0 5.0 julia> finalize_global_grid() ``` """ function z_g(iz::Integer, dz::GGNumber, A::GGArray)::GGNumber z0 = 0.5*(@nz()-size(A,3))*dz; z = (@coordz()*(@nz()-@olz()) + iz-1)*dz + z0; if @periodz() z = z - dz; if (z > (@nz_g()-1)*dz) z = z - @nz_g()*dz; end if (z < 0) z = z + @nz_g()*dz; end end return z end # Timing tools. @doc """ tic() Start chronometer once all processes have reached this point. !!! warning The chronometer may currently add an overhead of multiple 10th of miliseconds at the first usage. See also: [`toc`](@ref) """ tic @doc """ toc() Return the elapsed time from chronometer (since the last call to `tic`) when all processes have reached this point. !!! warning The chronometer may currently add an overhead of multiple 10th of miliseconds at the first usage. See also: [`tic`](@ref) """ toc let global tic, toc t0 = nothing tic() = ( check_initialized(); MPI.Barrier(comm()); t0 = time() ) toc() = ( check_initialized(); MPI.Barrier(comm()); time() - t0 ) end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
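A hedged sketch combining the global-size and global-coordinate helpers with the `tic`/`toc` chronometer (the physical length `lx` and the grid size are arbitrary choices):

```julia
using ImplicitGlobalGrid

nx = ny = nz = 8; lx = 4.0                      # illustrative values
me, dims = init_global_grid(nx, ny, nz)

dx = lx/(nx_g()-1)
A  = zeros(nx, ny, nz)
xc = [x_g(ix, dx, A) for ix = 1:size(A,1)]      # global x-coordinates of the local cells

tic()
update_halo!(A)
t = toc()
if me == 0  println("halo update took $t s; x-range on this rank: $(first(xc))..$(last(xc))")  end

finalize_global_grid()
```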
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
26791
export update_halo! """ update_halo!(A) update_halo!(A...) !!! note "Advanced" update_halo!(A, B, (A=C, halowidths=..., (A=D, halowidths=...), ...) Update the halo of the given GPU/CPU-array(s). # Typical use cases: update_halo!(A) # Update the halo of the array A. update_halo!(A, B, C) # Update the halos of the arrays A, B and C. update_halo!(A, B, (A=C, halowidths=(2,2,2))) # Update the halos of the arrays A, B, C, defining non default halowidth for C. !!! note "Performance note" Group subsequent calls to `update_halo!` in a single call for better performance (enables additional pipelining). !!! note "Performance note" If the system supports CUDA-aware MPI (for Nvidia GPUs) or ROCm-aware MPI (for AMD GPUs), it may be activated for ImplicitGlobalGrid by setting one of the following environment variables (at latest before the call to `init_global_grid`): ```shell shell> export IGG_CUDAAWARE_MPI=1 ``` ```shell shell> export IGG_ROCMAWARE_MPI=1 ``` """ function update_halo!(A::Union{GGArray, GGField, GGFieldConvertible}...; dims=(NDIMS_MPI,(1:NDIMS_MPI-1)...)) check_initialized(); fields = wrap_field.(A); check_fields(fields...); _update_halo!(fields...; dims=dims); # Assignment of A to fields in the internal function _update_halo!() as vararg A can consist of multiple fields; A will be used for a single field in the following (The args of update_halo! must however be "A..." for maximal simplicity and elegance for the user). return nothing end # function _update_halo!(fields::GGField...; dims=dims) if (!cuda_enabled() && !amdgpu_enabled() && !all_arrays(fields...)) error("not all arrays are CPU arrays, but no GPU extension is loaded.") end #NOTE: in the following, it is only required to check for `cuda_enabled()`/`amdgpu_enabled()` when the context does not imply `any_cuarray(fields...)` or `is_cuarray(A)` or the corresponding for AMDGPU. # NOTE: the case where only one of the two extensions are loaded, but an array dad would be for the other extension is passed is very unlikely and therefore not explicitly checked here (but could be added later). allocate_bufs(fields...); if any_array(fields...) allocate_tasks(fields...); end if any_cuarray(fields...) allocate_custreams(fields...); end if any_rocarray(fields...) allocate_rocstreams(fields...); end for dim in dims # NOTE: this works for 1D-3D (e.g. if nx>1, ny>1 and nz=1, then for d=3, there will be no neighbors, i.e. nothing will be done as desired...). for ns = 1:NNEIGHBORS_PER_DIM, i = 1:length(fields) if has_neighbor(ns, dim) iwrite_sendbufs!(ns, dim, fields[i], i); end end # Send / receive if the neighbors are other processes (usual case). reqs = fill(MPI.REQUEST_NULL, length(fields), NNEIGHBORS_PER_DIM, 2); if all(neighbors(dim) .!= me()) # Note: handling of send/recv to itself requires special configurations for some MPI implementations (e.g. self BTL must be activated with OpenMPI); so we handle this case without MPI to avoid this complication. for nr = NNEIGHBORS_PER_DIM:-1:1, i = 1:length(fields) # Note: if there were indeed more than 2 neighbors per dimension; then one would need to make sure which neigbour would communicate with which. if has_neighbor(nr, dim) reqs[i,nr,1] = irecv_halo!(nr, dim, fields[i], i); end end for ns = 1:NNEIGHBORS_PER_DIM, i = 1:length(fields) if has_neighbor(ns, dim) wait_iwrite(ns, fields[i], i); # Right before starting to send, make sure that the data of neighbor ns and field i has finished writing to the sendbuffer. 
reqs[i,ns,2] = isend_halo(ns, dim, fields[i], i); end end # Copy if I am my own neighbors (when periodic boundary and only one process in this dimension). elseif all(neighbors(dim) .== me()) for ns = 1:NNEIGHBORS_PER_DIM, i = 1:length(fields) wait_iwrite(ns, fields[i], i); # Right before starting to send, make sure that the data of neighbor ns and field i has finished writing to the sendbuffer. sendrecv_halo_local(ns, dim, fields[i], i); nr = NNEIGHBORS_PER_DIM - ns + 1; iread_recvbufs!(nr, dim, fields[i], i); end else error("Incoherent neighbors in dimension $dim: either all neighbors must equal to me, or none.") end for nr = NNEIGHBORS_PER_DIM:-1:1, i = 1:length(fields) # Note: if there were indeed more than 2 neighbors per dimension; then one would need to make sure which neigbour would communicate with which. if (reqs[i,nr,1]!=MPI.REQUEST_NULL) MPI.Wait!(reqs[i,nr,1]); end if (has_neighbor(nr, dim) && neighbor(nr, dim)!=me()) iread_recvbufs!(nr, dim, fields[i], i); end # Note: if neighbor(nr,dim) != me() is done directly in the sendrecv_halo_local loop above for better performance (thanks to pipelining) end for nr = NNEIGHBORS_PER_DIM:-1:1, i = 1:length(fields) # Note: if there were indeed more than 2 neighbors per dimension; then one would need to make sure which neigbour would communicate with which. if has_neighbor(nr, dim) wait_iread(nr, fields[i], i); end end for ns = 1:NNEIGHBORS_PER_DIM if (any(reqs[:,ns,2].!=[MPI.REQUEST_NULL])) MPI.Waitall!(reqs[:,ns,2]); end end end end ##--------------------------- ## FUNCTIONS FOR SYNTAX SUGAR halosize(dim::Integer, A::GGField) = (dim==1) ? (A.halowidths[1], size(A,2), size(A,3)) : ((dim==2) ? (size(A,1), A.halowidths[2], size(A,3)) : (size(A,1), size(A,2), A.halowidths[3])) ##--------------------------------------- ## FUNCTIONS RELATED TO BUFFER ALLOCATION # NOTE: CUDA and AMDGPU buffers live and are dealt with independently, enabling the support of usage of CUDA and AMD GPUs at the same time. let #TODO: this was: global free_update_halo_buffers, allocate_bufs, sendbuf, recvbuf, sendbuf_flat, recvbuf_flat, gpusendbuf, gpurecvbuf, gpusendbuf_flat, gpurecvbuf_flat, rocsendbuf, rocrecvbuf, rocsendbuf_flat, rocrecvbuf_flat global free_update_halo_buffers, allocate_bufs, sendbuf, recvbuf, sendbuf_flat, recvbuf_flat sendbufs_raw = nothing recvbufs_raw = nothing function free_update_halo_buffers() free_update_halo_cpubuffers() if (cuda_enabled() && none(cudaaware_MPI())) free_update_halo_cubuffers() end if (amdgpu_enabled() && none(amdgpuaware_MPI())) free_update_halo_rocbuffers() end GC.gc() #TODO: see how to modify this! end function free_update_halo_cpubuffers() reset_cpu_buffers(); end function reset_cpu_buffers() sendbufs_raw = nothing recvbufs_raw = nothing end # Allocate for each field two send and recv buffers (one for the left and one for the right neighbour of a dimension). The required length of the buffer is given by the maximal number of halo elements in any of the dimensions. Note that buffers are not allocated separately for each dimension, as the updates are performed one dimension at a time (required for correctness). function allocate_bufs(fields::GGField{T}...) 
where T <: GGNumber if (isnothing(sendbufs_raw) || isnothing(recvbufs_raw)) free_update_halo_buffers(); init_bufs_arrays(); if cuda_enabled() init_cubufs_arrays(); end if amdgpu_enabled() init_rocbufs_arrays(); end end init_bufs(T, fields...); if cuda_enabled() init_cubufs(T, fields...); end if amdgpu_enabled() init_rocbufs(T, fields...); end for i = 1:length(fields) A, halowidths = fields[i]; for n = 1:NNEIGHBORS_PER_DIM # Ensure that the buffers are interpreted to contain elements of the same type as the array. reinterpret_bufs(T, i, n); if cuda_enabled() reinterpret_cubufs(T, i, n); end if amdgpu_enabled() reinterpret_rocbufs(T, i, n); end end max_halo_elems = maximum((size(A,1)*size(A,2)*halowidths[3], size(A,1)*size(A,3)*halowidths[2], size(A,2)*size(A,3)*halowidths[1])); reallocate_undersized_hostbufs(T, i, max_halo_elems, A); if (is_cuarray(A) && any(cudaaware_MPI())) reallocate_undersized_cubufs(T, i, max_halo_elems) end if (is_rocarray(A) && any(amdgpuaware_MPI())) reallocate_undersized_rocbufs(T, i, max_halo_elems) end end end # (CPU functions) function init_bufs_arrays() sendbufs_raw = Array{Array{Any,1},1}(); recvbufs_raw = Array{Array{Any,1},1}(); end function init_bufs(T::DataType, fields::GGField...) while (length(sendbufs_raw) < length(fields)) push!(sendbufs_raw, [zeros(T,0), zeros(T,0)]); end while (length(recvbufs_raw) < length(fields)) push!(recvbufs_raw, [zeros(T,0), zeros(T,0)]); end end function reinterpret_bufs(T::DataType, i::Integer, n::Integer) if (eltype(sendbufs_raw[i][n]) != T) sendbufs_raw[i][n] = reinterpret(T, sendbufs_raw[i][n]); end if (eltype(recvbufs_raw[i][n]) != T) recvbufs_raw[i][n] = reinterpret(T, recvbufs_raw[i][n]); end end function reallocate_undersized_hostbufs(T::DataType, i::Integer, max_halo_elems::Integer, A::GGArray) if (length(sendbufs_raw[i][1]) < max_halo_elems) for n = 1:NNEIGHBORS_PER_DIM reallocate_bufs(T, i, n, max_halo_elems); if (is_cuarray(A) && none(cudaaware_MPI())) reregister_cubufs(T, i, n, sendbufs_raw, recvbufs_raw); end # Host memory is page-locked (and mapped to device memory) to ensure optimal access performance (from kernel or with 3-D memcopy). if (is_rocarray(A) && none(amdgpuaware_MPI())) reregister_rocbufs(T, i, n, sendbufs_raw, recvbufs_raw); end # ... end GC.gc(); # Too small buffers had been replaced with larger ones; free the now unused memory. end end function reallocate_bufs(T::DataType, i::Integer, n::Integer, max_halo_elems::Integer) sendbufs_raw[i][n] = zeros(T, Int(ceil(max_halo_elems/GG_ALLOC_GRANULARITY))*GG_ALLOC_GRANULARITY); # Ensure that the amount of allocated memory is a multiple of 4*sizeof(T) (sizeof(Float64)/sizeof(Float16) = 4). So, we can always correctly reinterpret the raw buffers even if next time sizeof(T) is greater. 
recvbufs_raw[i][n] = zeros(T, Int(ceil(max_halo_elems/GG_ALLOC_GRANULARITY))*GG_ALLOC_GRANULARITY); end # (CPU functions) function sendbuf_flat(n::Integer, dim::Integer, i::Integer, A::GGField{T}) where T <: GGNumber return view(sendbufs_raw[i][n]::AbstractVector{T},1:prod(halosize(dim,A))); end function recvbuf_flat(n::Integer, dim::Integer, i::Integer, A::GGField{T}) where T <: GGNumber return view(recvbufs_raw[i][n]::AbstractVector{T},1:prod(halosize(dim,A))); end function sendbuf(n::Integer, dim::Integer, i::Integer, A::GGField) return reshape(sendbuf_flat(n,dim,i,A), halosize(dim,A)); end function recvbuf(n::Integer, dim::Integer, i::Integer, A::GGField) return reshape(recvbuf_flat(n,dim,i,A), halosize(dim,A)); end # Make sendbufs_raw and recvbufs_raw accessible for unit testing. global get_sendbufs_raw, get_recvbufs_raw get_sendbufs_raw() = deepcopy(sendbufs_raw) get_recvbufs_raw() = deepcopy(recvbufs_raw) end ##---------------------------------------------- ## FUNCTIONS TO WRITE AND READ SEND/RECV BUFFERS # NOTE: the tasks, custreams and rocqueues are stored here in a let clause to have them survive the end of a call to update_boundaries. This avoids the creation of new tasks and cuda streams every time. Besides, that this could be relevant for performance, it is important for debugging the overlapping the communication with computation (if at every call new stream/task objects are created this becomes very messy and hard to analyse). # (CPU functions) function allocate_tasks(fields::GGField...) allocate_tasks_iwrite(fields...); allocate_tasks_iread(fields...); end let global iwrite_sendbufs!, allocate_tasks_iwrite, wait_iwrite tasks = Array{Task}(undef, NNEIGHBORS_PER_DIM, 0); wait_iwrite(n::Integer, A::CPUField{T}, i::Integer) where T <: GGNumber = (schedule(tasks[n,i]); wait(tasks[n,i]);) # The argument A is used for multiple dispatch. #NOTE: The current implementation only starts a task when it is waited for, in order to make sure that only one task is run at a time and that they are run in the desired order (best for performance as the tasks are mapped only to one thread via context switching). function allocate_tasks_iwrite(fields::GGField...) if length(fields) > size(tasks,2) # Note: for simplicity, we create a tasks for every field even if it is not an CPUField tasks = [tasks Array{Task}(undef, NNEIGHBORS_PER_DIM, length(fields)-size(tasks,2))]; # Create (additional) emtpy tasks. end end function iwrite_sendbufs!(n::Integer, dim::Integer, F::CPUField{T}, i::Integer) where T <: GGNumber # Function to be called if A is a CPUField. A, halowidths = F; tasks[n,i] = @task begin if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... write_h2h!(sendbuf(n,dim,i,F), A, sendranges(n,dim,F), dim); end end end # Make tasks accessible for unit testing. global get_tasks_iwrite get_tasks_iwrite() = deepcopy(tasks) end let global iread_recvbufs!, allocate_tasks_iread, wait_iread tasks = Array{Task}(undef, NNEIGHBORS_PER_DIM, 0); wait_iread(n::Integer, A::CPUField{T}, i::Integer) where T <: GGNumber = (schedule(tasks[n,i]); wait(tasks[n,i]);) #NOTE: The current implementation only starts a task when it is waited for, in order to make sure that only one task is run at a time and that they are run in the desired order (best for performance currently as the tasks are mapped only to one thread via context switching). function allocate_tasks_iread(fields::GGField...) 
if length(fields) > size(tasks,2) # Note: for simplicity, we create a tasks for every field even if it is not an Array tasks = [tasks Array{Task}(undef, NNEIGHBORS_PER_DIM, length(fields)-size(tasks,2))]; # Create (additional) emtpy tasks. end end function iread_recvbufs!(n::Integer, dim::Integer, F::CPUField{T}, i::Integer) where T <: GGNumber A, halowidths = F; tasks[n,i] = @task begin if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... read_h2h!(recvbuf(n,dim,i,F), A, recvranges(n,dim,F), dim); end end end # Make tasks accessible for unit testing. global get_tasks_iread get_tasks_iread() = deepcopy(tasks) end # (CPU/GPU functions) # Return the ranges from A to be sent. It will always return ranges for the dimensions x,y and z even if the A is 1D or 2D (for 2D, the 3rd range is 1:1; for 1D, the 2nd and 3rd range are 1:1). function sendranges(n::Integer, dim::Integer, F::GGField) A, halowidths = F; if (ol(dim, A) < 2*halowidths[dim]) error("Incoherent arguments: ol(A,dim)<2*halowidths[dim]."); end if (n==2) ixyz_dim = size(A, dim) - (ol(dim, A) - 1); elseif (n==1) ixyz_dim = 1 + (ol(dim, A) - halowidths[dim]); end sendranges = [1:size(A,1), 1:size(A,2), 1:size(A,3)]; # Initialize with the ranges of A. sendranges[dim] = ixyz_dim:ixyz_dim+halowidths[dim]-1; return sendranges end # Return the ranges from A to be received. It will always return ranges for the dimensions x,y and z even if the A is 1D or 2D (for 2D, the 3rd range is 1:1; for 1D, the 2nd and 3rd range are 1:1). function recvranges(n::Integer, dim::Integer, F::GGField) A, halowidths = F; if (ol(dim, A) < 2*halowidths[dim]) error("Incoherent arguments: ol(A,dim)<2*halowidths[dim]."); end if (n==2) ixyz_dim = size(A, dim) - (halowidths[dim] - 1); elseif (n==1) ixyz_dim = 1; end recvranges = [1:size(A,1), 1:size(A,2), 1:size(A,3)]; # Initialize with the ranges of A. recvranges[dim] = ixyz_dim:ixyz_dim+halowidths[dim]-1; return recvranges end # (CPU functions) # Write to the send buffer on the host from the array on the host (h2h). Note: it works for 1D-3D, as sendranges contains always 3 ranges independently of the number of dimensions of A (see function sendranges). function write_h2h!(sendbuf::AbstractArray{T}, A::Array{T}, sendranges::Array{UnitRange{T2},1}, dim::Integer) where T <: GGNumber where T2 <: Integer ix = (length(sendranges[1])==1) ? sendranges[1][1] : sendranges[1]; iy = (length(sendranges[2])==1) ? sendranges[2][1] : sendranges[2]; iz = (length(sendranges[3])==1) ? 
sendranges[3][1] : sendranges[3]; if (length(ix)==1 && iy == 1:size(A,2) && iz == 1:size(A,3) && !use_polyester(dim)) memcopy!(view(sendbuf, 1, :, :), view(A,ix, :, :), use_polyester(dim)); elseif (length(ix)==1 && length(iy)==1 && iz == 1:size(A,3) && !use_polyester(dim)) memcopy!(view(sendbuf, 1, 1, :), view(A,ix,iy, :), use_polyester(dim)); elseif (length(ix)==1 && iy == 1:size(A,2) && length(iz)==1 && !use_polyester(dim)) memcopy!(view(sendbuf, 1, :, 1), view(A,ix, :,iz), use_polyester(dim)); elseif (length(ix)==1 && length(iy)==1 && length(iz)==1 && !use_polyester(dim)) memcopy!(view(sendbuf, 1, 1, 1), view(A,ix,iy,iz), use_polyester(dim)); elseif (ix == 1:size(A,1) && length(iy)==1 && iz == 1:size(A,3) ) memcopy!(view(sendbuf, :, 1, :), view(A, :,iy, :), use_polyester(dim)); elseif (ix == 1:size(A,1) && length(iy)==1 && length(iz)==1 ) memcopy!(view(sendbuf, :, 1, 1), view(A, :,iy,iz), use_polyester(dim)); elseif (ix == 1:size(A,1) && iy == 1:size(A,2) && length(iz)==1 ) memcopy!(view(sendbuf, :, :, 1), view(A, :, :,iz), use_polyester(dim)); else memcopy!(sendbuf, view(A,sendranges...), use_polyester(dim)); # This general case is slower than the optimised cases above (the result would be the same, of course). end end # Read from the receive buffer on the host and store on the array on the host (h2h). Note: it works for 1D-3D, as recvranges contains always 3 ranges independently of the number of dimensions of A (see function recvranges). function read_h2h!(recvbuf::AbstractArray{T}, A::Array{T}, recvranges::Array{UnitRange{T2},1}, dim::Integer) where T <: GGNumber where T2 <: Integer ix = (length(recvranges[1])==1) ? recvranges[1][1] : recvranges[1]; iy = (length(recvranges[2])==1) ? recvranges[2][1] : recvranges[2]; iz = (length(recvranges[3])==1) ? recvranges[3][1] : recvranges[3]; if (length(ix)==1 && iy == 1:size(A,2) && iz == 1:size(A,3) && !use_polyester(dim)) memcopy!(view(A,ix, :, :), view(recvbuf, 1, :, :), use_polyester(dim)); elseif (length(ix)==1 && length(iy)==1 && iz == 1:size(A,3) && !use_polyester(dim)) memcopy!(view(A,ix,iy, :), view(recvbuf, 1, 1, :), use_polyester(dim)); elseif (length(ix)==1 && iy == 1:size(A,2) && length(iz)==1 && !use_polyester(dim)) memcopy!(view(A,ix, :,iz), view(recvbuf, 1, :, 1), use_polyester(dim)); elseif (length(ix)==1 && length(iy)==1 && length(iz)==1 && !use_polyester(dim)) memcopy!(view(A,ix,iy,iz), view(recvbuf, 1, 1, 1), use_polyester(dim)); elseif (ix == 1:size(A,1) && length(iy)==1 && iz == 1:size(A,3) ) memcopy!(view(A, :,iy, :), view(recvbuf, :, 1, :), use_polyester(dim)); elseif (ix == 1:size(A,1) && length(iy)==1 && length(iz)==1 ) memcopy!(view(A, :,iy,iz), view(recvbuf, :, 1, 1), use_polyester(dim)); elseif (ix == 1:size(A,1) && iy == 1:size(A,2) && length(iz)==1 ) memcopy!(view(A, :, :,iz), view(recvbuf, :, :, 1), use_polyester(dim)); else memcopy!(view(A,recvranges...), recvbuf, use_polyester(dim)); # This general case is slower than the optimised cases above (the result would be the same, of course). end end ##------------------------------ ## FUNCTIONS TO SEND/RECV FIELDS function irecv_halo!(n::Integer, dim::Integer, F::GGField, i::Integer; tag::Integer=0) req = MPI.REQUEST_NULL; A, halowidths = F; if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... 
if (cudaaware_MPI(dim) && is_cuarray(A)) || (amdgpuaware_MPI(dim) && is_rocarray(A)) req = MPI.Irecv!(gpurecvbuf_flat(n,dim,i,F), neighbor(n,dim), tag, comm()); else req = MPI.Irecv!(recvbuf_flat(n,dim,i,F), neighbor(n,dim), tag, comm()); end end return req end function isend_halo(n::Integer, dim::Integer, F::GGField, i::Integer; tag::Integer=0) req = MPI.REQUEST_NULL; A, halowidths = F; if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... if (cudaaware_MPI(dim) && is_cuarray(A)) || (amdgpuaware_MPI(dim) && is_rocarray(A)) req = MPI.Isend(gpusendbuf_flat(n,dim,i,F), neighbor(n,dim), tag, comm()); else req = MPI.Isend(sendbuf_flat(n,dim,i,F), neighbor(n,dim), tag, comm()); end end return req end function sendrecv_halo_local(n::Integer, dim::Integer, F::GGField, i::Integer) A, halowidths = F; if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... if (cudaaware_MPI(dim) && is_cuarray(A)) || (amdgpuaware_MPI(dim) && is_rocarray(A)) if n == 1 gpumemcopy!(gpurecvbuf_flat(2,dim,i,F), gpusendbuf_flat(1,dim,i,F)); elseif n == 2 gpumemcopy!(gpurecvbuf_flat(1,dim,i,F), gpusendbuf_flat(2,dim,i,F)); end else if n == 1 memcopy!(recvbuf_flat(2,dim,i,F), sendbuf_flat(1,dim,i,F), use_polyester(dim)); elseif n == 2 memcopy!(recvbuf_flat(1,dim,i,F), sendbuf_flat(2,dim,i,F), use_polyester(dim)); end end end end function memcopy!(dst::AbstractArray{T}, src::AbstractArray{T}, use_polyester::Bool) where T <: GGNumber if use_polyester && nthreads() > 1 && length(src) > 1 && !(T <: Complex) # NOTE: Polyester does not yet support Complex numbers and copy reinterpreted arrays leads to bad performance. memcopy_polyester!(dst, src) else dst_flat = view(dst,:) src_flat = view(src,:) memcopy_threads!(dst_flat, src_flat) end end # (CPU functions) function memcopy_threads!(dst::AbstractArray{T}, src::AbstractArray{T}) where T <: GGNumber if nthreads() > 1 && sizeof(src) >= GG_THREADCOPY_THRESHOLD @threads for i = 1:length(dst) # NOTE: Set the number of threads e.g. as: export JULIA_NUM_THREADS=12 @inbounds dst[i] = src[i] # NOTE: We fix here exceptionally the use of @inbounds as this copy between two flat vectors (which must have the right length) is considered safe. end else @inbounds copyto!(dst, src) end end ##------------------------------------------- ## FUNCTIONS FOR CHECKING THE INPUT ARGUMENTS # NOTE: no comparison must be done between the field-local halowidths and field-local overlaps because any combination is valid: the rational is that a field has simply no halo but only computation overlap in a given dimension if the corresponding local overlap is less than 2 times the local halowidth. This allows to determine whether a halo update needs to be done in a certain dimension or not. function check_fields(fields::GGField...) # Raise an error if any of the given fields has a halowidth less than 1. invalid_halowidths = [i for i=1:length(fields) if any([fields[i].halowidths[dim]<1 for dim=1:ndims(fields[i])])]; if length(invalid_halowidths) > 1 error("The fields at positions $(join(invalid_halowidths,", "," and ")) have a halowidth less than 1.") elseif length(invalid_halowidths) > 0 error("The field at position $(invalid_halowidths[1]) has a halowidth less than 1.") end # Raise an error if any of the given fields has no halo at all (in any dimension) - in this case there is no halo update to do and including the field in the call is inconsistent. 
no_halo = Int[]; for i = 1:length(fields) A, halowidths = fields[i] if all([(ol(dim, A) < 2*halowidths[dim]) for dim = 1:ndims(A)]) # There is no halo if the overlap is less than 2 times the halowidth (only computation overlap in this case)... push!(no_halo, i); end end if length(no_halo) > 1 error("The fields at positions $(join(no_halo,", "," and ")) have no halo; remove them from the call.") elseif length(no_halo) > 0 error("The field at position $(no_halo[1]) has no halo; remove it from the call.") end # Raise an error if any of the given fields contains any duplicates. duplicates = [[i,j] for i=1:length(fields) for j=i+1:length(fields) if fields[i].A===fields[j].A]; if length(duplicates) > 2 error("The pairs of fields with the positions $(join(duplicates,", "," and ")) are the same; remove any duplicates from the call.") elseif length(duplicates) > 0 error("The field at position $(duplicates[1][2]) is a duplicate of the one at the position $(duplicates[1][1]); remove the duplicate from the call.") end # Raise an error if not all fields are of the same datatype (restriction comes from buffer handling). different_types = [i for i=2:length(fields) if typeof(fields[i].A)!=typeof(fields[1].A)]; if length(different_types) > 1 error("The fields at positions $(join(different_types,", "," and ")) are of different type than the first field; make sure that in a same call all fields are of the same type.") elseif length(different_types) == 1 error("The field at position $(different_types[1]) is of different type than the first field; make sure that in a same call all fields are of the same type.") end end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
444
# shared.jl
is_rocarray(A::GGArray) = false

# select_device.jl
function nb_rocdevices end
function rocdevice! end

# update_halo.jl
function free_update_halo_rocbuffers end
function init_rocbufs_arrays end
function init_rocbufs end
function reinterpret_rocbufs end
function reallocate_undersized_rocbufs end
function reregister_rocbufs end
function get_rocsendbufs_raw end
function get_rocrecvbufs_raw end
function allocate_rocstreams end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
139
ImplicitGlobalGrid.nb_rocdevices() = length(AMDGPU.devices())
ImplicitGlobalGrid.rocdevice!(device_id) = AMDGPU.device_id!(device_id)
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1693
import ImplicitGlobalGrid
import ImplicitGlobalGrid: GGArray, GGField, GGNumber, halosize, ol, amdgpuaware_MPI, sendranges, recvranges, sendbuf_flat, recvbuf_flat, write_d2x!, read_x2d!, write_d2h_async!, read_h2d_async!, register, is_rocarray
import ImplicitGlobalGrid: NNEIGHBORS_PER_DIM, GG_ALLOC_GRANULARITY
using AMDGPU


##------
## TYPES

const ROCField{T,N} = GGField{T,N,ROCArray{T,N}}


##------------------------------------
## HANDLING OF CUDA AND AMDGPU SUPPORT

ImplicitGlobalGrid.is_loaded(::Val{:ImplicitGlobalGrid_AMDGPUExt}) = true
ImplicitGlobalGrid.is_functional(::Val{:AMDGPU}) = AMDGPU.functional()


##-------------
## SYNTAX SUGAR

ImplicitGlobalGrid.is_rocarray(A::ROCArray) = true #NOTE: this function is only to be used when multiple dispatch on the type of the array seems an overkill (in particular when only something needs to be done for the GPU case, but nothing for the CPU case) and as long as performance does not suffer.


##--------------------------------------------------------------------------------
## FUNCTIONS FOR WRAPPING ARRAYS AND FIELDS AND DEFINE ARRAY PROPERTY BASE METHODS

ImplicitGlobalGrid.wrap_field(A::ROCArray, hw::Tuple) = ROCField{eltype(A), ndims(A)}((A, hw))

Base.size(A::ROCField)          = Base.size(A.A)
Base.size(A::ROCField, args...) = Base.size(A.A, args...)
Base.length(A::ROCField)        = Base.length(A.A)
Base.ndims(A::ROCField)         = Base.ndims(A.A)
Base.eltype(A::ROCField)        = Base.eltype(A.A)


##---------------
## AMDGPU functions

function ImplicitGlobalGrid.register(::Type{<:ROCArray},buf::Array{T}) where T <: GGNumber
    return unsafe_wrap(ROCArray, pointer(buf), size(buf))
end
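# Hedged sketch (not part of ImplicitGlobalGrid): `wrap_field` above pairs a ROCArray with its halo
# widths and forwards the usual Base queries to the wrapped array. The snippet below imitates that
# wrapper pattern with a plain struct and a CPU Array purely for illustration; `ToyField` is a made-up
# name, not the package's GGField/ROCField type.
struct ToyField{T,N}
    A::Array{T,N}
    halowidths::NTuple{N,Int}
end
Base.size(F::ToyField)   = size(F.A)
Base.eltype(F::ToyField) = eltype(F.A)

F = ToyField(zeros(8, 8), (1, 1))
@assert size(F) == (8, 8) && eltype(F) == Float64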
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
14158
##--------------------------------------- ## FUNCTIONS RELATED TO BUFFER ALLOCATION # NOTE: CUDA and AMDGPU buffers live and are dealt with independently, enabling the support of usage of CUDA and AMD GPUs at the same time. ImplicitGlobalGrid.free_update_halo_rocbuffers(args...) = free_update_halo_rocbuffers(args...) ImplicitGlobalGrid.init_rocbufs_arrays(args...) = init_rocbufs_arrays(args...) ImplicitGlobalGrid.init_rocbufs(args...) = init_rocbufs(args...) ImplicitGlobalGrid.reinterpret_rocbufs(args...) = reinterpret_rocbufs(args...) ImplicitGlobalGrid.reallocate_undersized_rocbufs(args...) = reallocate_undersized_rocbufs(args...) ImplicitGlobalGrid.reregister_rocbufs(args...) = reregister_rocbufs(args...) ImplicitGlobalGrid.get_rocsendbufs_raw(args...) = get_rocsendbufs_raw(args...) ImplicitGlobalGrid.get_rocrecvbufs_raw(args...) = get_rocrecvbufs_raw(args...) ImplicitGlobalGrid.gpusendbuf(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where {T <: GGNumber} = gpusendbuf(n,dim,i,A) ImplicitGlobalGrid.gpurecvbuf(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where {T <: GGNumber} = gpurecvbuf(n,dim,i,A) ImplicitGlobalGrid.gpusendbuf_flat(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where {T <: GGNumber} = gpusendbuf_flat(n,dim,i,A) ImplicitGlobalGrid.gpurecvbuf_flat(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where {T <: GGNumber} = gpurecvbuf_flat(n,dim,i,A) let global free_update_halo_rocbuffers, init_rocbufs_arrays, init_rocbufs, reinterpret_rocbufs, reregister_rocbufs, reallocate_undersized_rocbufs global gpusendbuf, gpurecvbuf, gpusendbuf_flat, gpurecvbuf_flat rocsendbufs_raw = nothing rocrecvbufs_raw = nothing # INFO: no need for roc host buffers function free_update_halo_rocbuffers() free_rocbufs(rocsendbufs_raw) free_rocbufs(rocrecvbufs_raw) # INFO: no need for roc host buffers reset_roc_buffers() end function free_rocbufs(bufs) if (bufs !== nothing) for i = 1:length(bufs) for n = 1:length(bufs[i]) if is_rocarray(bufs[i][n]) AMDGPU.unsafe_free!(bufs[i][n]); bufs[i][n] = []; end # DEBUG: unsafe_free should be managed in AMDGPU end end end end # INFO: no need for roc host buffers # function unregister_rocbufs(bufs) # end function reset_roc_buffers() rocsendbufs_raw = nothing rocrecvbufs_raw = nothing # INFO: no need for roc host buffers end # (AMDGPU functions) function init_rocbufs_arrays() rocsendbufs_raw = Array{Array{Any,1},1}(); rocrecvbufs_raw = Array{Array{Any,1},1}(); # INFO: no need for roc host buffers end function init_rocbufs(T::DataType, fields::GGField...) while (length(rocsendbufs_raw) < length(fields)) push!(rocsendbufs_raw, [ROCArray{T}(undef,0), ROCArray{T}(undef,0)]); end while (length(rocrecvbufs_raw) < length(fields)) push!(rocrecvbufs_raw, [ROCArray{T}(undef,0), ROCArray{T}(undef,0)]); end # INFO: no need for roc host buffers end function reinterpret_rocbufs(T::DataType, i::Integer, n::Integer) if (eltype(rocsendbufs_raw[i][n]) != T) rocsendbufs_raw[i][n] = reinterpret(T, rocsendbufs_raw[i][n]); end if (eltype(rocrecvbufs_raw[i][n]) != T) rocrecvbufs_raw[i][n] = reinterpret(T, rocrecvbufs_raw[i][n]); end end function reallocate_undersized_rocbufs(T::DataType, i::Integer, max_halo_elems::Integer) if (!isnothing(rocsendbufs_raw) && length(rocsendbufs_raw[i][1]) < max_halo_elems) for n = 1:NNEIGHBORS_PER_DIM reallocate_rocbufs(T, i, n, max_halo_elems); GC.gc(); # Too small buffers had been replaced with larger ones; free the unused memory immediately. 
end end end function reallocate_rocbufs(T::DataType, i::Integer, n::Integer, max_halo_elems::Integer) rocsendbufs_raw[i][n] = AMDGPU.zeros(T, Int(ceil(max_halo_elems/GG_ALLOC_GRANULARITY))*GG_ALLOC_GRANULARITY); # Ensure that the amount of allocated memory is a multiple of 4*sizeof(T) (sizeof(Float64)/sizeof(Float16) = 4). So, we can always correctly reinterpret the raw buffers even if next time sizeof(T) is greater. rocrecvbufs_raw[i][n] = AMDGPU.zeros(T, Int(ceil(max_halo_elems/GG_ALLOC_GRANULARITY))*GG_ALLOC_GRANULARITY); end function reregister_rocbufs(T::DataType, i::Integer, n::Integer, sendbufs_raw, recvbufs_raw) # INFO: no need for roc host buffers rocsendbufs_raw[i][n] = register(ROCArray,sendbufs_raw[i][n]); rocrecvbufs_raw[i][n] = register(ROCArray,recvbufs_raw[i][n]); end # (AMDGPU functions) function gpusendbuf_flat(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where T <: GGNumber return view(rocsendbufs_raw[i][n]::ROCVector{T},1:prod(halosize(dim,A))); end function gpurecvbuf_flat(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where T <: GGNumber return view(rocrecvbufs_raw[i][n]::ROCVector{T},1:prod(halosize(dim,A))); end # (GPU functions) #TODO: see if remove T here and in other cases for CuArray, ROCArray or Array (but then it does not verify that CuArray/ROCArray is of type GGNumber) or if I should instead change GGArray to GGArrayUnion and create: GGArray = Array{T} where T <: GGNumber and GGCuArray = CuArray{T} where T <: GGNumber; This is however more difficult to read and understand for others. function gpusendbuf(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where T <: GGNumber return reshape(gpusendbuf_flat(n,dim,i,A), halosize(dim,A)); end function gpurecvbuf(n::Integer, dim::Integer, i::Integer, A::ROCField{T}) where T <: GGNumber return reshape(gpurecvbuf_flat(n,dim,i,A), halosize(dim,A)); end # Make sendbufs_raw and recvbufs_raw accessible for unit testing. global get_rocsendbufs_raw, get_rocrecvbufs_raw get_rocsendbufs_raw() = deepcopy(rocsendbufs_raw) get_rocrecvbufs_raw() = deepcopy(rocrecvbufs_raw) end ##---------------------------------------------- ## FUNCTIONS TO WRITE AND READ SEND/RECV BUFFERS function ImplicitGlobalGrid.allocate_rocstreams(fields::GGField...) allocate_rocstreams_iwrite(fields...); allocate_rocstreams_iread(fields...); end ImplicitGlobalGrid.iwrite_sendbufs!(n::Integer, dim::Integer, F::ROCField{T}, i::Integer) where {T <: GGNumber} = iwrite_sendbufs!(n,dim,F,i) ImplicitGlobalGrid.iread_recvbufs!(n::Integer, dim::Integer, F::ROCField{T}, i::Integer) where {T <: GGNumber} = iread_recvbufs!(n,dim,F,i) ImplicitGlobalGrid.wait_iwrite(n::Integer, A::ROCField{T}, i::Integer) where {T <: GGNumber} = wait_iwrite(n,A,i) ImplicitGlobalGrid.wait_iread(n::Integer, A::ROCField{T}, i::Integer) where {T <: GGNumber} = wait_iread(n,A,i) let global iwrite_sendbufs!, allocate_rocstreams_iwrite, wait_iwrite rocstreams = Array{AMDGPU.HIPStream}(undef, NNEIGHBORS_PER_DIM, 0) wait_iwrite(n::Integer, A::ROCField{T}, i::Integer) where T <: GGNumber = AMDGPU.synchronize(rocstreams[n,i]; blocking=true); function allocate_rocstreams_iwrite(fields::GGField...) if length(fields) > size(rocstreams,2) # Note: for simplicity, we create a stream for every field even if it is not a ROCField rocstreams = [rocstreams [AMDGPU.HIPStream(:high) for n=1:NNEIGHBORS_PER_DIM, i=1:(length(fields)-size(rocstreams,2))]]; # Create (additional) maximum priority nonblocking streams to enable overlap with computation kernels. 
end end function iwrite_sendbufs!(n::Integer, dim::Integer, F::ROCField{T}, i::Integer) where T <: GGNumber A, halowidths = F; if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... # DEBUG: the follow section needs perf testing # DEBUG 2: commenting read_h2d_async! for now # if dim == 1 || amdgpuaware_MPI(dim) # Use a custom copy kernel for the first dimension to obtain a good copy performance (the CUDA 3-D memcopy does not perform well for this extremely strided case). ranges = sendranges(n, dim, F); nthreads = (dim==1) ? (1, 32, 1) : (32, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @roc gridsize=nblocks groupsize=nthreads stream=rocstreams[n,i] write_d2x!(gpusendbuf(n,dim,i,F), A, ranges[1], ranges[2], ranges[3], dim); # else # write_d2h_async!(sendbuf_flat(n,dim,i,F), A, sendranges(n,dim,F), rocstreams[n,i]); # end end end end let global iread_recvbufs!, allocate_rocstreams_iread, wait_iread rocstreams = Array{AMDGPU.HIPStream}(undef, NNEIGHBORS_PER_DIM, 0) wait_iread(n::Integer, A::ROCField{T}, i::Integer) where T <: GGNumber = AMDGPU.synchronize(rocstreams[n,i]; blocking=true); function allocate_rocstreams_iread(fields::GGField...) if length(fields) > size(rocstreams,2) # Note: for simplicity, we create a stream for every field even if it is not a ROCField rocstreams = [rocstreams [AMDGPU.HIPStream(:high) for n=1:NNEIGHBORS_PER_DIM, i=1:(length(fields)-size(rocstreams,2))]]; # Create (additional) maximum priority nonblocking streams to enable overlap with computation kernels. end end function iread_recvbufs!(n::Integer, dim::Integer, F::ROCField{T}, i::Integer) where T <: GGNumber A, halowidths = F; if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... # DEBUG: the follow section needs perf testing # DEBUG 2: commenting read_h2d_async! for now # if dim == 1 || amdgpuaware_MPI(dim) # Use a custom copy kernel for the first dimension to obtain a good copy performance (the CUDA 3-D memcopy does not perform well for this extremely strided case). ranges = recvranges(n, dim, F); nthreads = (dim==1) ? (1, 32, 1) : (32, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @roc gridsize=nblocks groupsize=nthreads stream=rocstreams[n,i] read_x2d!(gpurecvbuf(n,dim,i,F), A, ranges[1], ranges[2], ranges[3], dim); # else # read_h2d_async!(recvbuf_flat(n,dim,i,F), A, recvranges(n,dim,F), rocstreams[n,i]); # end end end end # (AMDGPU functions) # Write to the send buffer on the host or device from the array on the device (d2x). function ImplicitGlobalGrid.write_d2x!(gpusendbuf::ROCDeviceArray{T}, A::ROCDeviceArray{T}, sendrangex::UnitRange{Int64}, sendrangey::UnitRange{Int64}, sendrangez::UnitRange{Int64}, dim::Integer) where T <: GGNumber ix = (AMDGPU.workgroupIdx().x-1) * AMDGPU.workgroupDim().x + AMDGPU.workitemIdx().x + sendrangex[1] - 1 iy = (AMDGPU.workgroupIdx().y-1) * AMDGPU.workgroupDim().y + AMDGPU.workitemIdx().y + sendrangey[1] - 1 iz = (AMDGPU.workgroupIdx().z-1) * AMDGPU.workgroupDim().z + AMDGPU.workitemIdx().z + sendrangez[1] - 1 if !(ix in sendrangex && iy in sendrangey && iz in sendrangez) return nothing; end gpusendbuf[ix-(sendrangex[1]-1),iy-(sendrangey[1]-1),iz-(sendrangez[1]-1)] = A[ix,iy,iz]; return nothing end # Read from the receive buffer on the host or device and store on the array on the device (x2d). 
function ImplicitGlobalGrid.read_x2d!(gpurecvbuf::ROCDeviceArray{T}, A::ROCDeviceArray{T}, recvrangex::UnitRange{Int64}, recvrangey::UnitRange{Int64}, recvrangez::UnitRange{Int64}, dim::Integer) where T <: GGNumber ix = (AMDGPU.workgroupIdx().x-1) * AMDGPU.workgroupDim().x + AMDGPU.workitemIdx().x + recvrangex[1] - 1 iy = (AMDGPU.workgroupIdx().y-1) * AMDGPU.workgroupDim().y + AMDGPU.workitemIdx().y + recvrangey[1] - 1 iz = (AMDGPU.workgroupIdx().z-1) * AMDGPU.workgroupDim().z + AMDGPU.workitemIdx().z + recvrangez[1] - 1 if !(ix in recvrangex && iy in recvrangey && iz in recvrangez) return nothing; end A[ix,iy,iz] = gpurecvbuf[ix-(recvrangex[1]-1),iy-(recvrangey[1]-1),iz-(recvrangez[1]-1)]; return nothing end # Write to the send buffer on the host from the array on the device (d2h). function ImplicitGlobalGrid.write_d2h_async!(sendbuf::AbstractArray{T}, A::ROCArray{T}, sendranges::Array{UnitRange{T2},1}, rocstream::AMDGPU.HIPStream) where T <: GGNumber where T2 <: Integer buf_view = reshape(sendbuf, Tuple(length.(sendranges))) AMDGPU.Mem.unsafe_copy3d!( pointer(sendbuf), AMDGPU.Mem.HostBuffer, pointer(A), typeof(A.buf), length(sendranges[1]), length(sendranges[2]), length(sendranges[3]); srcPos=(sendranges[1][1], sendranges[2][1], sendranges[3][1]), dstPitch=sizeof(T) * size(buf_view, 1), dstHeight=size(buf_view, 2), srcPitch=sizeof(T) * size(A, 1), srcHeight=size(A, 2), async=true, stream=rocstream ) return nothing end # Read from the receive buffer on the host and store on the array on the device (h2d). function ImplicitGlobalGrid.read_h2d_async!(recvbuf::AbstractArray{T}, A::ROCArray{T}, recvranges::Array{UnitRange{T2},1}, rocstream::AMDGPU.HIPStream) where T <: GGNumber where T2 <: Integer buf_view = reshape(recvbuf, Tuple(length.(recvranges))) AMDGPU.Mem.unsafe_copy3d!( pointer(A), typeof(A.buf), pointer(recvbuf), AMDGPU.Mem.HostBuffer, length(recvranges[1]), length(recvranges[2]), length(recvranges[3]); dstPos=(recvranges[1][1], recvranges[2][1], recvranges[3][1]), dstPitch=sizeof(T) * size(A, 1), dstHeight=size(A, 2), srcPitch=sizeof(T) * size(buf_view, 1), srcHeight=size(buf_view, 2), async=true, stream=rocstream ) return nothing end ##------------------------------ ## FUNCTIONS TO SEND/RECV FIELDS function ImplicitGlobalGrid.gpumemcopy!(dst::ROCArray{T}, src::ROCArray{T}) where T <: GGNumber @inbounds AMDGPU.copyto!(dst, src) end
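# Hedged sketch (not part of ImplicitGlobalGrid): `iwrite_sendbufs!`/`iread_recvbufs!` above derive
# their kernel launch geometry from the halo ranges: 32 work-items along the first direction that is
# not normal to the halo, and enough work-groups to cover the whole halo. The host-side arithmetic is
# reproduced below with made-up numbers; it launches nothing and needs no GPU.
dim      = 1
halosize = (1, 64, 70)                            # hypothetical halo extent for a dim == 1 exchange
nthreads = (dim == 1) ? (1, 32, 1) : (32, 1, 1)
nblocks  = Tuple(ceil.(Int, halosize ./ nthreads))
@assert nblocks == (1, 2, 70)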
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
432
# shared.jl
is_cuarray(A::GGArray) = false

# select_device.jl
function nb_cudevices end
function cudevice! end

# update_halo.jl
function free_update_halo_cubuffers end
function init_cubufs_arrays end
function init_cubufs end
function reinterpret_cubufs end
function reallocate_undersized_cubufs end
function reregister_cubufs end
function get_cusendbufs_raw end
function get_curecvbufs_raw end
function allocate_custreams end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
130
ImplicitGlobalGrid.nb_cudevices() = length(CUDA.devices())
ImplicitGlobalGrid.cudevice!(device_id) = CUDA.device!(device_id)
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1813
import ImplicitGlobalGrid
import ImplicitGlobalGrid: GGArray, GGField, GGNumber, halosize, ol, cudaaware_MPI, sendranges, recvranges, sendbuf_flat, recvbuf_flat, write_d2x!, read_x2d!, write_d2h_async!, read_h2d_async!, register, is_cuarray
import ImplicitGlobalGrid: NNEIGHBORS_PER_DIM, GG_ALLOC_GRANULARITY
using CUDA


##------
## TYPES

const CuField{T,N} = GGField{T,N,CuArray{T,N}}


##------------------------------------
## HANDLING OF CUDA AND AMDGPU SUPPORT

ImplicitGlobalGrid.is_loaded(::Val{:ImplicitGlobalGrid_CUDAExt}) = true
ImplicitGlobalGrid.is_functional(::Val{:CUDA}) = CUDA.functional()


##-------------
## SYNTAX SUGAR

ImplicitGlobalGrid.is_cuarray(A::CuArray) = true #NOTE: this function is only to be used when multiple dispatch on the type of the array seems an overkill (in particular when only something needs to be done for the GPU case, but nothing for the CPU case) and as long as performance does not suffer.


##--------------------------------------------------------------------------------
## FUNCTIONS FOR WRAPPING ARRAYS AND FIELDS AND DEFINE ARRAY PROPERTY BASE METHODS

ImplicitGlobalGrid.wrap_field(A::CuArray, hw::Tuple) = CuField{eltype(A), ndims(A)}((A, hw))

Base.size(A::CuField)          = Base.size(A.A)
Base.size(A::CuField, args...) = Base.size(A.A, args...)
Base.length(A::CuField)        = Base.length(A.A)
Base.ndims(A::CuField)         = Base.ndims(A.A)
Base.eltype(A::CuField)        = Base.eltype(A.A)


##---------------
## CUDA functions

function ImplicitGlobalGrid.register(::Type{<:CuArray},buf::Array{T}) where T <: GGNumber
    rbuf = CUDA.Mem.register(CUDA.Mem.Host, pointer(buf), sizeof(buf), CUDA.Mem.HOSTREGISTER_DEVICEMAP);
    rbuf_d = convert(CuPtr{T}, rbuf);
    return unsafe_wrap(CuArray, rbuf_d, size(buf)), rbuf;
end
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
14490
##--------------------------------------- ## FUNCTIONS RELATED TO BUFFER ALLOCATION # NOTE: CUDA and AMDGPU buffers live and are dealt with independently, enabling the support of usage of CUDA and AMD GPUs at the same time. ImplicitGlobalGrid.free_update_halo_cubuffers(args...) = free_update_halo_cubuffers(args...) ImplicitGlobalGrid.init_cubufs_arrays(args...) = init_cubufs_arrays(args...) ImplicitGlobalGrid.init_cubufs(args...) = init_cubufs(args...) ImplicitGlobalGrid.reinterpret_cubufs(args...) = reinterpret_cubufs(args...) ImplicitGlobalGrid.reallocate_undersized_cubufs(args...) = reallocate_undersized_cubufs(args...) ImplicitGlobalGrid.reregister_cubufs(args...) = reregister_cubufs(args...) ImplicitGlobalGrid.get_cusendbufs_raw(args...) = get_cusendbufs_raw(args...) ImplicitGlobalGrid.get_curecvbufs_raw(args...) = get_curecvbufs_raw(args...) ImplicitGlobalGrid.gpusendbuf(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where {T <: GGNumber} = gpusendbuf(n,dim,i,A) ImplicitGlobalGrid.gpurecvbuf(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where {T <: GGNumber} = gpurecvbuf(n,dim,i,A) ImplicitGlobalGrid.gpusendbuf_flat(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where {T <: GGNumber} = gpusendbuf_flat(n,dim,i,A) ImplicitGlobalGrid.gpurecvbuf_flat(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where {T <: GGNumber} = gpurecvbuf_flat(n,dim,i,A) let global free_update_halo_cubuffers, init_cubufs_arrays, init_cubufs, reinterpret_cubufs, reregister_cubufs, reallocate_undersized_cubufs global gpusendbuf, gpurecvbuf, gpusendbuf_flat, gpurecvbuf_flat cusendbufs_raw = nothing curecvbufs_raw = nothing cusendbufs_raw_h = nothing curecvbufs_raw_h = nothing function free_update_halo_cubuffers() free_cubufs(cusendbufs_raw) free_cubufs(curecvbufs_raw) unregister_cubufs(cusendbufs_raw_h) unregister_cubufs(curecvbufs_raw_h) reset_cu_buffers() end function free_cubufs(bufs) if (bufs !== nothing) for i = 1:length(bufs) for n = 1:length(bufs[i]) if is_cuarray(bufs[i][n]) CUDA.unsafe_free!(bufs[i][n]); bufs[i][n] = []; end end end end end function unregister_cubufs(bufs) if (bufs !== nothing) for i = 1:length(bufs) for n = 1:length(bufs[i]) if (isa(bufs[i][n],CUDA.Mem.HostBuffer)) CUDA.Mem.unregister(bufs[i][n]); bufs[i][n] = []; end end end end end function reset_cu_buffers() cusendbufs_raw = nothing curecvbufs_raw = nothing cusendbufs_raw_h = nothing curecvbufs_raw_h = nothing end # (CUDA functions) function init_cubufs_arrays() cusendbufs_raw = Array{Array{Any,1},1}(); curecvbufs_raw = Array{Array{Any,1},1}(); cusendbufs_raw_h = Array{Array{Any,1},1}(); curecvbufs_raw_h = Array{Array{Any,1},1}(); end function init_cubufs(T::DataType, fields::GGField...) 
while (length(cusendbufs_raw) < length(fields)) push!(cusendbufs_raw, [CuArray{T}(undef,0), CuArray{T}(undef,0)]); end while (length(curecvbufs_raw) < length(fields)) push!(curecvbufs_raw, [CuArray{T}(undef,0), CuArray{T}(undef,0)]); end while (length(cusendbufs_raw_h) < length(fields)) push!(cusendbufs_raw_h, [[], []]); end while (length(curecvbufs_raw_h) < length(fields)) push!(curecvbufs_raw_h, [[], []]); end end function reinterpret_cubufs(T::DataType, i::Integer, n::Integer) if (eltype(cusendbufs_raw[i][n]) != T) cusendbufs_raw[i][n] = reinterpret(T, cusendbufs_raw[i][n]); end if (eltype(curecvbufs_raw[i][n]) != T) curecvbufs_raw[i][n] = reinterpret(T, curecvbufs_raw[i][n]); end end function reallocate_undersized_cubufs(T::DataType, i::Integer, max_halo_elems::Integer) if (!isnothing(cusendbufs_raw) && length(cusendbufs_raw[i][1]) < max_halo_elems) for n = 1:NNEIGHBORS_PER_DIM reallocate_cubufs(T, i, n, max_halo_elems); GC.gc(); # Too small buffers had been replaced with larger ones; free the unused memory immediately. end end end function reallocate_cubufs(T::DataType, i::Integer, n::Integer, max_halo_elems::Integer) cusendbufs_raw[i][n] = CUDA.zeros(T, Int(ceil(max_halo_elems/GG_ALLOC_GRANULARITY))*GG_ALLOC_GRANULARITY); # Ensure that the amount of allocated memory is a multiple of 4*sizeof(T) (sizeof(Float64)/sizeof(Float16) = 4). So, we can always correctly reinterpret the raw buffers even if next time sizeof(T) is greater. curecvbufs_raw[i][n] = CUDA.zeros(T, Int(ceil(max_halo_elems/GG_ALLOC_GRANULARITY))*GG_ALLOC_GRANULARITY); end function reregister_cubufs(T::DataType, i::Integer, n::Integer, sendbufs_raw, recvbufs_raw) if (isa(cusendbufs_raw_h[i][n],CUDA.Mem.HostBuffer)) CUDA.Mem.unregister(cusendbufs_raw_h[i][n]); cusendbufs_raw_h[i][n] = []; end # It is always initialized registered... if (cusendbufs_raw_h[i][n].bytesize > 32*sizeof(T)) if (isa(curecvbufs_raw_h[i][n],CUDA.Mem.HostBuffer)) CUDA.Mem.unregister(curecvbufs_raw_h[i][n]); curecvbufs_raw_h[i][n] = []; end # It is always initialized registered... if (curecvbufs_raw_h[i][n].bytesize > 32*sizeof(T)) cusendbufs_raw[i][n], cusendbufs_raw_h[i][n] = register(CuArray,sendbufs_raw[i][n]); curecvbufs_raw[i][n], curecvbufs_raw_h[i][n] = register(CuArray,recvbufs_raw[i][n]); end # (CUDA functions) function gpusendbuf_flat(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where T <: GGNumber return view(cusendbufs_raw[i][n]::CuVector{T},1:prod(halosize(dim,A))); end function gpurecvbuf_flat(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where T <: GGNumber return view(curecvbufs_raw[i][n]::CuVector{T},1:prod(halosize(dim,A))); end # (GPU functions) #TODO: see if remove T here and in other cases for CuArray, ROCArray or Array (but then it does not verify that CuArray/ROCArray is of type GGNumber) or if I should instead change GGArray to GGArrayUnion and create: GGArray = Array{T} where T <: GGNumber and GGCuArray = CuArray{T} where T <: GGNumber; This is however more difficult to read and understand for others. function gpusendbuf(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where T <: GGNumber return reshape(gpusendbuf_flat(n,dim,i,A), halosize(dim,A)); end function gpurecvbuf(n::Integer, dim::Integer, i::Integer, A::CuField{T}) where T <: GGNumber return reshape(gpurecvbuf_flat(n,dim,i,A), halosize(dim,A)); end # Make sendbufs_raw and recvbufs_raw accessible for unit testing. 
global get_cusendbufs_raw, get_curecvbufs_raw get_cusendbufs_raw() = deepcopy(cusendbufs_raw) get_curecvbufs_raw() = deepcopy(curecvbufs_raw) end ##---------------------------------------------- ## FUNCTIONS TO WRITE AND READ SEND/RECV BUFFERS function ImplicitGlobalGrid.allocate_custreams(fields::GGField...) allocate_custreams_iwrite(fields...); allocate_custreams_iread(fields...); end ImplicitGlobalGrid.iwrite_sendbufs!(n::Integer, dim::Integer, F::CuField{T}, i::Integer) where {T <: GGNumber} = iwrite_sendbufs!(n,dim,F,i) ImplicitGlobalGrid.iread_recvbufs!(n::Integer, dim::Integer, F::CuField{T}, i::Integer) where {T <: GGNumber} = iread_recvbufs!(n,dim,F,i) ImplicitGlobalGrid.wait_iwrite(n::Integer, A::CuField{T}, i::Integer) where {T <: GGNumber} = wait_iwrite(n,A,i) ImplicitGlobalGrid.wait_iread(n::Integer, A::CuField{T}, i::Integer) where {T <: GGNumber} = wait_iread(n,A,i) let global iwrite_sendbufs!, allocate_custreams_iwrite, wait_iwrite custreams = Array{CuStream}(undef, NNEIGHBORS_PER_DIM, 0) wait_iwrite(n::Integer, A::CuField{T}, i::Integer) where T <: GGNumber = CUDA.synchronize(custreams[n,i]; blocking=true); function allocate_custreams_iwrite(fields::GGField...) if length(fields) > size(custreams,2) # Note: for simplicity, we create a stream for every field even if it is not a CuField custreams = [custreams [CuStream(; flags=CUDA.STREAM_NON_BLOCKING, priority=CUDA.priority_range()[end]) for n=1:NNEIGHBORS_PER_DIM, i=1:(length(fields)-size(custreams,2))]]; # Create (additional) maximum priority nonblocking streams to enable overlap with computation kernels. end end function iwrite_sendbufs!(n::Integer, dim::Integer, F::CuField{T}, i::Integer) where T <: GGNumber A, halowidths = F; if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... if dim == 1 || cudaaware_MPI(dim) # Use a custom copy kernel for the first dimension to obtain a good copy performance (the CUDA 3-D memcopy does not perform well for this extremely strided case). ranges = sendranges(n, dim, F); nthreads = (dim==1) ? (1, 32, 1) : (32, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @cuda blocks=nblocks threads=nthreads stream=custreams[n,i] write_d2x!(gpusendbuf(n,dim,i,F), A, ranges[1], ranges[2], ranges[3], dim); else write_d2h_async!(sendbuf_flat(n,dim,i,F), A, sendranges(n,dim,F), custreams[n,i]); end end end end let global iread_recvbufs!, allocate_custreams_iread, wait_iread custreams = Array{CuStream}(undef, NNEIGHBORS_PER_DIM, 0) wait_iread(n::Integer, A::CuField{T}, i::Integer) where T <: GGNumber = CUDA.synchronize(custreams[n,i]; blocking=true); function allocate_custreams_iread(fields::GGField...) if length(fields) > size(custreams,2) # Note: for simplicity, we create a stream for every field even if it is not a CuField custreams = [custreams [CuStream(; flags=CUDA.STREAM_NON_BLOCKING, priority=CUDA.priority_range()[end]) for n=1:NNEIGHBORS_PER_DIM, i=1:(length(fields)-size(custreams,2))]]; # Create (additional) maximum priority nonblocking streams to enable overlap with computation kernels. end end function iread_recvbufs!(n::Integer, dim::Integer, F::CuField{T}, i::Integer) where T <: GGNumber A, halowidths = F; if ol(dim,A) >= 2*halowidths[dim] # There is only a halo and thus a halo update if the overlap is at least 2 times the halowidth... 
if dim == 1 || cudaaware_MPI(dim) # Use a custom copy kernel for the first dimension to obtain a good copy performance (the CUDA 3-D memcopy does not perform well for this extremely strided case). ranges = recvranges(n, dim, F); nthreads = (dim==1) ? (1, 32, 1) : (32, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @cuda blocks=nblocks threads=nthreads stream=custreams[n,i] read_x2d!(gpurecvbuf(n,dim,i,F), A, ranges[1], ranges[2], ranges[3], dim); else read_h2d_async!(recvbuf_flat(n,dim,i,F), A, recvranges(n,dim,F), custreams[n,i]); end end end end # (CUDA functions) # Write to the send buffer on the host or device from the array on the device (d2x). function ImplicitGlobalGrid.write_d2x!(gpusendbuf::CuDeviceArray{T}, A::CuDeviceArray{T}, sendrangex::UnitRange{Int64}, sendrangey::UnitRange{Int64}, sendrangez::UnitRange{Int64}, dim::Integer) where T <: GGNumber ix = (CUDA.blockIdx().x-1) * CUDA.blockDim().x + CUDA.threadIdx().x + sendrangex[1] - 1 iy = (CUDA.blockIdx().y-1) * CUDA.blockDim().y + CUDA.threadIdx().y + sendrangey[1] - 1 iz = (CUDA.blockIdx().z-1) * CUDA.blockDim().z + CUDA.threadIdx().z + sendrangez[1] - 1 if !(ix in sendrangex && iy in sendrangey && iz in sendrangez) return nothing; end gpusendbuf[ix-(sendrangex[1]-1),iy-(sendrangey[1]-1),iz-(sendrangez[1]-1)] = A[ix,iy,iz]; return nothing end # Read from the receive buffer on the host or device and store on the array on the device (x2d). function ImplicitGlobalGrid.read_x2d!(gpurecvbuf::CuDeviceArray{T}, A::CuDeviceArray{T}, recvrangex::UnitRange{Int64}, recvrangey::UnitRange{Int64}, recvrangez::UnitRange{Int64}, dim::Integer) where T <: GGNumber ix = (CUDA.blockIdx().x-1) * CUDA.blockDim().x + CUDA.threadIdx().x + recvrangex[1] - 1 iy = (CUDA.blockIdx().y-1) * CUDA.blockDim().y + CUDA.threadIdx().y + recvrangey[1] - 1 iz = (CUDA.blockIdx().z-1) * CUDA.blockDim().z + CUDA.threadIdx().z + recvrangez[1] - 1 if !(ix in recvrangex && iy in recvrangey && iz in recvrangez) return nothing; end A[ix,iy,iz] = gpurecvbuf[ix-(recvrangex[1]-1),iy-(recvrangey[1]-1),iz-(recvrangez[1]-1)]; return nothing end # Write to the send buffer on the host from the array on the device (d2h). function ImplicitGlobalGrid.write_d2h_async!(sendbuf::AbstractArray{T}, A::CuArray{T}, sendranges::Array{UnitRange{T2},1}, custream::CuStream) where T <: GGNumber where T2 <: Integer CUDA.Mem.unsafe_copy3d!( pointer(sendbuf), CUDA.Mem.Host, pointer(A), CUDA.Mem.Device, length(sendranges[1]), length(sendranges[2]), length(sendranges[3]); srcPos=(sendranges[1][1], sendranges[2][1], sendranges[3][1]), srcPitch=sizeof(T)*size(A,1), srcHeight=size(A,2), dstPitch=sizeof(T)*length(sendranges[1]), dstHeight=length(sendranges[2]), async=true, stream=custream ) end # Read from the receive buffer on the host and store on the array on the device (h2d). 
function ImplicitGlobalGrid.read_h2d_async!(recvbuf::AbstractArray{T}, A::CuArray{T}, recvranges::Array{UnitRange{T2},1}, custream::CuStream) where T <: GGNumber where T2 <: Integer CUDA.Mem.unsafe_copy3d!( pointer(A), CUDA.Mem.Device, pointer(recvbuf), CUDA.Mem.Host, length(recvranges[1]), length(recvranges[2]), length(recvranges[3]); dstPos=(recvranges[1][1], recvranges[2][1], recvranges[3][1]), srcPitch=sizeof(T)*length(recvranges[1]), srcHeight=length(recvranges[2]), dstPitch=sizeof(T)*size(A,1), dstHeight=size(A,2), async=true, stream=custream ) end ##------------------------------ ## FUNCTIONS TO SEND/RECV FIELDS function ImplicitGlobalGrid.gpumemcopy!(dst::CuArray{T}, src::CuArray{T}) where T <: GGNumber @inbounds CUDA.copyto!(dst, src) end
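# Hedged sketch (not part of ImplicitGlobalGrid): `reallocate_cubufs`/`reallocate_rocbufs` above round
# the buffer length up to a multiple of GG_ALLOC_GRANULARITY so the raw buffers can later be
# reinterpreted to a wider element type without coming up short. The rounding is plain integer
# arithmetic, restated here with a granularity of 4 elements (an assumption suggested by the in-code
# comment about 4*sizeof(T)) and a made-up element count.
granularity    = 4        # assumed value for illustration; the real constant is GG_ALLOC_GRANULARITY
max_halo_elems = 1001
buflen = Int(ceil(max_halo_elems / granularity)) * granularity
@assert buflen == 1004 && buflen % granularity == 0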
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
871
import ImplicitGlobalGrid
import ImplicitGlobalGrid: GGNumber
using Polyester

function ImplicitGlobalGrid.memcopy_polyester!(dst::AbstractArray{T}, src::AbstractArray{T}) where T <: GGNumber
    @batch for i ∈ eachindex(dst, src) # NOTE: @batch will use maximally Threads.nthreads() threads / #cores threads. Set the number of threads e.g. as: export JULIA_NUM_THREADS=12. NOTE on previous implementation with LoopVectorization: tturbo fails if src_flat and dst_flat are used due to an issue in ArrayInterface : https://github.com/JuliaArrays/ArrayInterface.jl/issues/228 TODO: once the package has matured check again if there is any benefit with: per=core stride=true
        @inbounds dst[i] = src[i] # NOTE: We fix here exceptionally the use of @inbounds as this copy between two flat vectors (which must have the right length) is considered safe.
    end
end
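# Hedged sketch (not part of ImplicitGlobalGrid): the extension above hands large host-side buffer
# copies to Polyester's `@batch` loop. The standalone function below uses the same pattern outside the
# package, assuming Polyester is installed and Julia was started with several threads; it only shows
# the copy idiom, not the package's internal dispatch.
using Polyester

function batch_copy!(dst::AbstractVector{T}, src::AbstractVector{T}) where T
    @batch for i in eachindex(dst, src)
        @inbounds dst[i] = src[i]  # safe: eachindex(dst, src) guarantees matching indices
    end
    return dst
end

src = rand(10_000); dst = similar(src)
@assert batch_copy!(dst, src) == src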
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
224
const ERRMSG_EXTENSION_NOT_LOADED = "PolyesterExt: the Polyester extension was not loaded. Make sure to import Polyester before ImplicitGlobalGrid."

memcopy_polyester!(args...) = @NotLoadedError(ERRMSG_EXTENSION_NOT_LOADED)
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
1777
# NOTE: This file contains many parts that are copied from the file runtests.jl from the Package MPI.jl.
push!(LOAD_PATH, "../src") # FIXME: to be removed everywhere?
import ImplicitGlobalGrid # Precompile it.
import ImplicitGlobalGrid: SUPPORTED_DEVICE_TYPES, DEVICE_TYPE_CUDA, DEVICE_TYPE_AMDGPU
@static if (DEVICE_TYPE_CUDA in SUPPORTED_DEVICE_TYPES) import CUDA end
@static if (DEVICE_TYPE_AMDGPU in SUPPORTED_DEVICE_TYPES) import AMDGPU end

excludedfiles = ["test_excluded.jl"];

function runtests()
    exename = joinpath(Sys.BINDIR, Base.julia_exename())
    testdir = pwd()
    istest(f) = endswith(f, ".jl") && startswith(basename(f), "test_")
    testfiles = sort(filter(istest, vcat([joinpath.(root, files) for (root, dirs, files) in walkdir(testdir)]...)))
    nfail = 0
    printstyled("Testing package ImplicitGlobalGrid.jl\n"; bold=true, color=:white)
    if (DEVICE_TYPE_CUDA in SUPPORTED_DEVICE_TYPES && !CUDA.functional())
        @warn "Test Skip: All CUDA tests will be skipped because CUDA is not functional (if this is unexpected type `import CUDA; CUDA.functional(true)` to debug your CUDA installation)."
    end
    if (DEVICE_TYPE_AMDGPU in SUPPORTED_DEVICE_TYPES && !AMDGPU.functional())
        @warn "Test Skip: All AMDGPU tests will be skipped because AMDGPU is not functional (if this is unexpected type `import AMDGPU; AMDGPU.functional()` to debug your AMDGPU installation)."
    end
    for f in testfiles
        println("")
        if f ∈ excludedfiles
            println("Test Skip:")
            println("$f")
            continue
        end
        try
            run(`$exename -O3 --startup-file=no $(joinpath(testdir, f))`)
        catch ex
            nfail += 1
        end
    end
    return nfail
end

exit(runtests())
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
666
push!(LOAD_PATH, "../src")
using Test
import MPI, CUDA, AMDGPU
using ImplicitGlobalGrid; GG = ImplicitGlobalGrid
import ImplicitGlobalGrid: @require

@testset "$(basename(@__FILE__))" begin
    @testset "1. finalization of global grid and MPI" begin
        init_global_grid(4, 4, 4, quiet=true); # NOTE: these tests can run with any number of processes.
        @require GG.grid_is_initialized()
        @require !MPI.Finalized()
        finalize_global_grid()
        @test !GG.grid_is_initialized()
    end;
    @testset "2. exceptions" begin
        @test_throws ErrorException finalize_global_grid(); # Finalize can never be before initialize.
    end;
end;
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
8196
push!(LOAD_PATH, "../src") using Test import MPI, CUDA, AMDGPU using ImplicitGlobalGrid; GG = ImplicitGlobalGrid import ImplicitGlobalGrid: @require ## Test setup MPI.Init(); nprocs = MPI.Comm_size(MPI.COMM_WORLD); # NOTE: these tests can run with any number of processes. nx = 7; ny = 5; nz = 6; dx = 1.0 dy = 1.0 dz = 1.0 @testset "$(basename(@__FILE__)) (processes: $nprocs)" begin @testset "1. argument check" begin @testset "sizes" begin me, dims = init_global_grid(nx, ny, nz, quiet=true, init_MPI=false); A = zeros(nx); B = zeros(nx, ny); C = zeros(nx, ny, nz); A_g = zeros(nx*dims[1]+1); B_g = zeros(nx*dims[1], ny*dims[2]-1); C_g = zeros(nx*dims[1], ny*dims[2], nz*dims[3]+2); if (me == 0) @test_throws ErrorException gather!(A, A_g) end # Error: A_g is not product of size(A) and dims (1D) if (me == 0) @test_throws ErrorException gather!(B, B_g) end # Error: B_g is not product of size(A) and dims (2D) if (me == 0) @test_throws ErrorException gather!(C, C_g) end # Error: C_g is not product of size(A) and dims (3D) if (me == 0) @test_throws ErrorException gather!(C, nothing) end # Error: global is nothing finalize_global_grid(finalize_MPI=false); end; end; @testset "2. gather!" begin @testset "1D" begin me, dims = init_global_grid(nx, 1, 1, overlaps=(0,0,0), quiet=true, init_MPI=false); P = zeros(nx); P_g = zeros(nx*dims[1]); P .= [x_g(ix,dx,P) for ix=1:size(P,1)]; P_g_ref = [x_g(ix,dx,P_g) for ix=1:size(P_g,1)]; P_g_ref .= -P_g_ref[1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== P_g_ref) end finalize_global_grid(finalize_MPI=false); end; @testset "2D" begin me, dims = init_global_grid(nx, ny, 1, overlaps=(0,0,0), quiet=true, init_MPI=false); P = zeros(nx, ny); P_g = zeros(nx*dims[1], ny*dims[2]); P .= [y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2)]; P_g_ref = [y_g(iy,dy,P_g)*1e1 + x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2)]; P_g_ref .= -P_g_ref[1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== P_g_ref) end finalize_global_grid(finalize_MPI=false); end; @testset "3D" begin me, dims = init_global_grid(nx, ny, nz, overlaps=(0,0,0), quiet=true, init_MPI=false); P = zeros(nx, ny, nz); P_g = zeros(nx*dims[1], ny*dims[2], nz*dims[3]); P .= [z_g(iz,dz,P)*1e2 + y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P_g_ref = [z_g(iz,dz,P_g)*1e2 + y_g(iy,dy,P_g)*1e1 + x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2), iz=1:size(P_g,3)]; P_g_ref .= -P_g_ref[1,1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== P_g_ref) end finalize_global_grid(finalize_MPI=false); end; @testset "1D, then larger 3D, then smaller 2D" begin me, dims = init_global_grid(nx, ny, nz, overlaps=(0,0,0), quiet=true, init_MPI=false); # (1D) P = zeros(nx); P_g = zeros(nx*dims[1], dims[2], dims[3]); P .= [x_g(ix,dx,P) for ix=1:size(P,1)]; P_g_ref = [x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2), iz=1:size(P_g,3)]; P_g_ref .= -P_g_ref[1,1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. 
gather!(P, P_g); if (me == 0) @test all(P_g .== P_g_ref) end # (3D) P = zeros(nx, ny, nz); P_g = zeros(nx*dims[1], ny*dims[2], nz*dims[3]); P .= [z_g(iz,dz,P)*1e2 + y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P_g_ref = [z_g(iz,dz,P_g)*1e2 + y_g(iy,dy,P_g)*1e1 + x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2), iz=1:size(P_g,3)]; P_g_ref .= -P_g_ref[1,1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== P_g_ref) end # (2D) P = zeros(nx, ny); P_g = zeros(nx*dims[1], ny*dims[2], dims[3]); P .= [y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2)]; P_g_ref = [y_g(iy,dy,P_g)*1e1 + x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2), iz=1:size(P,3)]; P_g_ref .= -P_g_ref[1,1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== P_g_ref) end finalize_global_grid(finalize_MPI=false); end; @testset "Float32, then Float64, then Int16" begin me, dims = init_global_grid(nx, ny, nz, overlaps=(0,0,0), quiet=true, init_MPI=false); # Float32 (1D) P = zeros(Float32, nx); P_g = zeros(Float32, nx*dims[1], dims[2], dims[3]); P .= [x_g(ix,dx,P) for ix=1:size(P,1)]; P_g_ref = [x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2), iz=1:size(P_g,3)]; P_g_ref .= -P_g_ref[1,1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== Float32.(P_g_ref)) end # Float64 (3D) P = zeros(Float64, nx, ny, nz); P_g = zeros(Float64, nx*dims[1], ny*dims[2], nz*dims[3]); P .= [z_g(iz,dz,P)*1e2 + y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P_g_ref = [z_g(iz,dz,P_g)*1e2 + y_g(iy,dy,P_g)*1e1 + x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2), iz=1:size(P_g,3)]; P_g_ref .= -P_g_ref[1,1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== Float64.(P_g_ref)) end # Int16 (2D) P = zeros(Int16, nx, ny); P_g = zeros(Int16, nx*dims[1], ny*dims[2], dims[3]); P .= [y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2)]; P_g_ref = [y_g(iy,dy,P_g)*1e1 + x_g(ix,dx,P_g) for ix=1:size(P_g,1), iy=1:size(P_g,2), iz=1:size(P,3)]; P_g_ref .= -P_g_ref[1,1,1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. gather!(P, P_g); if (me == 0) @test all(P_g .== Int16.(P_g_ref)) end finalize_global_grid(finalize_MPI=false); end; if (nprocs>1) @testset "non-default root" begin me, dims = init_global_grid(nx, 1, 1, quiet=true, init_MPI=false); A = zeros(nx); A_g = zeros(nx*dims[1]); A .= 1.0; root = 1; gather!(A, A_g; root=root); if (me == root) @test all(A_g .== 1.0) end finalize_global_grid(finalize_MPI=false); end; end @testset "nothing on non-root" begin me, dims = init_global_grid(nx, 1, 1, overlaps=(0,0,0), quiet=true, init_MPI=false); P = zeros(nx); P_g = (me == 0) ? zeros(nx*dims[1]) : nothing P .= [x_g(ix,dx,P) for ix=1:size(P,1)]; if (me == 0) P_g_ref = [x_g(ix,dx,P_g) for ix=1:size(P_g,1)]; P_g_ref .= -P_g_ref[1] .+ P_g_ref; # NOTE: We add the first value of P_g_ref to have it start at 0.0. end gather!(P, P_g); if (me == 0) @test all(P_g .== P_g_ref) end finalize_global_grid(finalize_MPI=false); end; end; end; ## Test tear down MPI.Finalize()
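# Hedged sketch (not part of ImplicitGlobalGrid): the tests above exercise `gather!`, which collects
# each process's local array into a global array that needs to exist only on the root. A minimal driver
# following the same pattern is sketched below; like the tests it uses overlaps=(0,0,0) so the local
# blocks tile the global array exactly, and it is meant to be launched as an MPI program (e.g. with
# MPI.jl's mpiexecjl).
using ImplicitGlobalGrid

nx, ny, nz = 4, 4, 4
me, dims   = init_global_grid(nx, ny, nz; overlaps=(0, 0, 0), quiet=true)
P   = fill(Float64(me), nx, ny, nz)                                    # each rank contributes its id
P_g = (me == 0) ? zeros(nx*dims[1], ny*dims[2], nz*dims[3]) : nothing  # global array on the root only
gather!(P, P_g)
(me == 0) && println("gathered ", size(P_g), " from ", prod(dims), " process(es)")
finalize_global_grid()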
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
6040
push!(LOAD_PATH, "../src") using Test import MPI, CUDA, AMDGPU using ImplicitGlobalGrid; GG = ImplicitGlobalGrid import ImplicitGlobalGrid: @require ## Test setup (NOTE: Testset "2. initialization including MPI" completes the test setup as it initializes MPI and must therefore mandatorily be at the 2nd position). NOTE: these tests require nprocs == 1. p0 = MPI.PROC_NULL nx = 4; ny = 4; nz = 1; @testset "$(basename(@__FILE__))" begin @testset "1. pre-MPI_Init-exception" begin @require !GG.grid_is_initialized() @test_throws ErrorException init_global_grid(nx, ny, nz, quiet=true, init_MPI=false); # Error: init_MPI=false while MPI has not been initialized before. @test !GG.grid_is_initialized() end; @testset "2. initialization including MPI" begin me, dims, nprocs, coords, comm_cart = init_global_grid(nx, ny, nz, dimx=1, dimy=1, dimz=1, quiet=true); @testset "initialized" begin @test GG.grid_is_initialized() @test MPI.Initialized() end; @testset "return values" begin @test me == 0 @test dims == [1, 1, 1] @test nprocs == 1 @test coords == [0, 0, 0] @test typeof(comm_cart) == MPI.Comm end; @testset "values in global grid" begin @test GG.global_grid().nxyz_g == [nx, ny, nz] @test GG.global_grid().nxyz == [nx, ny, nz] @test GG.global_grid().dims == dims @test GG.global_grid().overlaps == [2, 2, 2] @test GG.global_grid().halowidths== [1, 1, 1] @test GG.global_grid().nprocs == nprocs @test GG.global_grid().me == me @test GG.global_grid().coords == coords @test GG.global_grid().neighbors == [p0 p0 p0; p0 p0 p0] @test GG.global_grid().periods == [0, 0, 0] @test GG.global_grid().disp == 1 @test GG.global_grid().reorder == 1 @test GG.global_grid().comm == comm_cart @test GG.global_grid().quiet == true end; finalize_global_grid(finalize_MPI=false); end; @testset "3. initialization with pre-initialized MPI" begin @require MPI.Initialized() @require !GG.grid_is_initialized() init_global_grid(nx, ny, nz, quiet=true, init_MPI=false); @test GG.grid_is_initialized() finalize_global_grid(finalize_MPI=false); end; @testset "4. initialization with periodic boundaries" begin nz=4; init_global_grid(nx, ny, nz, dimx=1, dimy=1, dimz=1, periodx=1, periodz=1, quiet=true, init_MPI=false); @testset "initialized" begin @test GG.grid_is_initialized() end; @testset "values in global grid" begin # (Checks only what is different than in the basic test.) @test GG.global_grid().nxyz_g == [nx-2, ny, nz-2] @test GG.global_grid().nxyz == [nx, ny, nz ] @test GG.global_grid().neighbors == [0 p0 0; 0 p0 0] @test GG.global_grid().periods == [1, 0, 1] end finalize_global_grid(finalize_MPI=false); end; @testset "5. initialization with non-default overlaps and one periodic boundary" begin nz = 10; olx = 3; oly = 0; olz = 4; init_global_grid(nx, ny, nz, dimx=1, dimy=1, dimz=1, periodz=1, overlaps=(olx, oly, olz), quiet=true, init_MPI=false); @testset "initialized" begin @test GG.grid_is_initialized() end @testset "values in global grid" begin # (Checks only what is different than in the basic test.) @test GG.global_grid().nxyz_g == [nx, ny, nz-olz] # Note: olx has no effect as there is only 1 process and this boundary is not periodic. @test GG.global_grid().nxyz == [nx, ny, nz ] @test GG.global_grid().overlaps == [olx, oly, olz] @test GG.global_grid().halowidths== [1, 1, 2] @test GG.global_grid().neighbors == [p0 p0 0; p0 p0 0] @test GG.global_grid().periods == [0, 0, 1] end; finalize_global_grid(finalize_MPI=false); end; @testset "6. 
post-MPI_Init-exceptions" begin @require MPI.Initialized() @require !GG.grid_is_initialized() nx = 4; ny = 4; nz = 4; @test_throws ErrorException init_global_grid(1, ny, nz, quiet=true, init_MPI=false); # Error: nx==1. @test_throws ErrorException init_global_grid(nx, 1, nz, quiet=true, init_MPI=false); # Error: ny==1, while nz>1. @test_throws ErrorException init_global_grid(nx, ny, 1, dimz=3, quiet=true, init_MPI=false); # Error: dimz>1 while nz==1. @test_throws ErrorException init_global_grid(nx, ny, 1, periodz=1, quiet=true, init_MPI=false); # Error: periodz==1 while nz==1. @test_throws ErrorException init_global_grid(nx, ny, nz, periody=1, overlaps=(2,3,2), quiet=true, init_MPI=false); # Error: periody==1 while ny<2*overlaps[2]-1 (4<5). @test_throws ErrorException init_global_grid(nx, ny, nz, halowidths=(1,0,1), quiet=true, init_MPI=false); # Error: halowidths[2]<1. @test_throws ErrorException init_global_grid(nx, ny, nz, overlaps=(4,3,2), halowidths=(2,2,1), quiet=true, init_MPI=false); # Error: halowidths[2]==2 while overlaps[2]==3. @test_throws ErrorException init_global_grid(nx, ny, nz, quiet=true); # Error: MPI already initialized @testset "already initialized exception" begin init_global_grid(nx, ny, nz, quiet=true, init_MPI=false); @require GG.grid_is_initialized() @test_throws ErrorException init_global_grid(nx, ny, nz, quiet=true, init_MPI=false); # Error: IGG already initialised finalize_global_grid(finalize_MPI=false); end; end; end; ## Test tear down MPI.Finalize()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
3150
# NOTE: All tests of this file can be run with any number of processes. push!(LOAD_PATH, "../src") using Test import MPI using CUDA, AMDGPU using ImplicitGlobalGrid; GG = ImplicitGlobalGrid import ImplicitGlobalGrid: @require test_cuda = CUDA.functional() test_amdgpu = AMDGPU.functional() ## Test setup MPI.Init(); nprocs = MPI.Comm_size(MPI.COMM_WORLD); # NOTE: these tests can run with any number of processes. @testset "$(basename(@__FILE__)) (processes: $nprocs)" begin @testset "1. select_device" begin @static if test_cuda && !test_amdgpu @testset "\"CUDA\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="CUDA"); gpu_id = select_device(); @test gpu_id < length(CUDA.devices()) finalize_global_grid(finalize_MPI=false); end; @testset "\"auto\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="auto"); gpu_id = select_device(); @test gpu_id < length(CUDA.devices()) finalize_global_grid(finalize_MPI=false); end; end @static if test_amdgpu && !test_cuda @testset "\"AMDGPU\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="AMDGPU"); gpu_id = select_device(); @test gpu_id <= length(AMDGPU.devices()) finalize_global_grid(finalize_MPI=false); end; @testset "\"auto\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="auto"); gpu_id = select_device(); @test gpu_id <= length(AMDGPU.devices()) finalize_global_grid(finalize_MPI=false); end; end @static if !(test_cuda || test_amdgpu) || (test_cuda && test_amdgpu) @testset "\"auto\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="auto"); @test_throws ErrorException select_device() finalize_global_grid(finalize_MPI=false); end; end @static if !test_cuda @testset "\"CUDA\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="CUDA"); @test_throws ErrorException select_device() finalize_global_grid(finalize_MPI=false); end; end @static if !test_amdgpu @testset "\"AMDGPU\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="AMDGPU"); @test_throws ErrorException select_device() finalize_global_grid(finalize_MPI=false); end; end @testset "\"none\"" begin me, = init_global_grid(3, 4, 5; quiet=true, init_MPI=false, device_type="none"); @test_throws ErrorException select_device() finalize_global_grid(finalize_MPI=false); end end; end; ## Test tear down MPI.Finalize()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
9359
push!(LOAD_PATH, "../src") using Test import MPI, CUDA, AMDGPU using ImplicitGlobalGrid; GG = ImplicitGlobalGrid import ImplicitGlobalGrid: @require macro coords(i) :(GG.global_grid().coords[$i]) end ## Test setup MPI.Init(); nprocs = MPI.Comm_size(MPI.COMM_WORLD); @require nprocs == 1 # NOTE: these tests require nprocs == 1. @testset "$(basename(@__FILE__))" begin @testset "1. *_g functions" begin lx = 8; ly = 8; lz = 8; nx = 5; ny = 5; nz = 5; P = zeros(nx, ny, nz ); Vx = zeros(nx+1,ny, nz ); Vz = zeros(nx, ny, nz+1); A = zeros(nx, ny, nz+2); Sxz = zeros(nx-2,ny-1,nz-2); init_global_grid(nx, ny, nz, dimx=1, dimy=1, dimz=1, periodz=1, quiet=true, init_MPI=false); @testset "nx_g / ny_g / nz_g" begin @test nx_g() == nx @test ny_g() == ny @test nz_g() == nz-2 end; @testset "x_g / y_g / z_g" begin dx = lx/(nx_g()-1); dy = ly/(ny_g()-1); dz = lz/(nz_g()-1); # (for P) @test [x_g(ix,dx,P) for ix = 1:size(P,1)] == [0.0, 2.0, 4.0, 6.0, 8.0] @test [y_g(iy,dy,P) for iy = 1:size(P,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] @test [z_g(iz,dz,P) for iz = 1:size(P,3)] == [8.0, 0.0, 4.0, 8.0, 0.0] # (for Vx) @test [x_g(ix,dx,Vx) for ix = 1:size(Vx,1)] == [-1.0, 1.0, 3.0, 5.0, 7.0, 9.0] @test [y_g(iy,dy,Vx) for iy = 1:size(Vx,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] @test [z_g(iz,dz,Vx) for iz = 1:size(Vx,3)] == [8.0, 0.0, 4.0, 8.0, 0.0] # (for Vz) @test [x_g(ix,dx,Vz) for ix = 1:size(Vz,1)] == [0.0, 2.0, 4.0, 6.0, 8.0] @test [y_g(iy,dy,Vz) for iy = 1:size(Vz,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] @test [z_g(iz,dz,Vz) for iz = 1:size(Vz,3)] == [ 6.0, 10.0, 2.0, 6.0, 10.0, 2.0] # base grid (z dim): [ 8.0, 0.0, 4.0, 8.0, 0.0] # possible alternative: [ 6.0, -2.0, 2.0, 6.0, -2.0, 2.0] # This would be a possible alternative way to define {x,y,z}_g; however, we decided that the grid should start at 0.0 in this case and the overlap be at the end (we avoid completely any negative z_g). # wrong: [ 6.0, -2.0, 2.0, 6.0, 10.0, 2.0] # The 2nd and the 2nd-last cell must be the same due to the overlap of 3. # (for A) @test [x_g(ix,dx,A) for ix = 1:size(A,1)] == [0.0, 2.0, 4.0, 6.0, 8.0] @test [y_g(iy,dy,A) for iy = 1:size(A,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] @test [z_g(iz,dz,A) for iz = 1:size(A,3)] == [4.0, 8.0, 0.0, 4.0, 8.0, 0.0, 4.0] # base grid (z dim): [ 8.0, 0.0, 4.0, 8.0, 0.0] # (for Sxz) @test [x_g(ix,dx,Sxz) for ix = 1:size(Sxz,1)] == [2.0, 4.0, 6.0] # base grid (x dim): [0.0, 2.0, 4.0, 6.0, 8.0] @test [y_g(iy,dy,Sxz) for iy = 1:size(Sxz,2)] == [1.0, 3.0, 5.0, 7.0] # base grid (y dim): [0.0, 2.0, 4.0, 6.0, 8.0] @test [z_g(iz,dz,Sxz) for iz = 1:size(Sxz,3)] == [0.0, 4.0, 8.0] # base grid (z dim): [ 8.0, 0.0, 4.0, 8.0, 0.0] end; finalize_global_grid(finalize_MPI=false); end; @testset "2. 
*_g functions with non-default overlap" begin lx = 8; ly = 8; lz = 8; nx = 5; ny = 5; nz = 8; P = zeros(nx, ny, nz ); Vx = zeros(nx+1,ny, nz ); Vz = zeros(nx, ny, nz+1); A = zeros(nx, ny, nz+2); Sxz = zeros(nx-2,ny-1,nz-2); init_global_grid(nx, ny, nz, dimx=1, dimy=1, dimz=1, periodz=1, overlaps=(3,2,3), quiet=true, init_MPI=false); @testset "nx_g / ny_g / nz_g" begin @test nx_g() == nx @test ny_g() == ny @test nz_g() == nz-3 end; @testset "x_g / y_g / z_g" begin dx = lx/(nx_g()-1); dy = ly/(ny_g()-1); dz = lz/(nz_g()-1); # (for P) @test [x_g(ix,dx,P) for ix = 1:size(P,1)] == [0.0, 2.0, 4.0, 6.0, 8.0] # (same as in the first test) @test [y_g(iy,dy,P) for iy = 1:size(P,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] # (same as in the first test) @test [z_g(iz,dz,P) for iz = 1:size(P,3)] == [8.0, 0.0, 2.0, 4.0, 6.0, 8.0, 0.0, 2.0] # (for Vz) @test [x_g(ix,dx,Vz) for ix = 1:size(Vz,1)] == [0.0, 2.0, 4.0, 6.0, 8.0] # (same as in the first test) @test [y_g(iy,dy,Vz) for iy = 1:size(Vz,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] # (same as in the first test) @test [z_g(iz,dz,Vz) for iz = 1:size(Vz,3)] == [7.0, 9.0, 1.0, 3.0, 5.0, 7.0, 9.0, 1.0, 3.0] # base grid (z dim): [8.0, 0.0, 2.0, 4.0, 6.0. 8.0, 0.0, 2.0] # possible alternative: [7.0,-1.0, 1.0, 3.0, 5.0, 7.0,-1.0, 1.0, 3.0] # (for A) @test [x_g(ix,dx,A) for ix = 1:size(A,1)] == [0.0, 2.0, 4.0, 6.0, 8.0] # (same as in the first test) @test [y_g(iy,dy,A) for iy = 1:size(A,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] # (same as in the first test) @test [z_g(iz,dz,A) for iz = 1:size(A,3)] == [6.0, 8.0, 0.0, 2.0, 4.0, 6.0, 8.0, 0.0, 2.0, 4.0] # base grid (z dim): [8.0, 0.0, 2.0, 4.0, 6.0, 8.0, 0.0, 2.0] # (for Sxz) @test [x_g(ix,dx,Sxz) for ix = 1:size(Sxz,1)] == [2.0, 4.0, 6.0] # (same as in the first test) # base grid (x dim): [0.0, 2.0, 4.0, 6.0, 8.0] @test [y_g(iy,dy,Sxz) for iy = 1:size(Sxz,2)] == [1.0, 3.0, 5.0, 7.0] # (same as in the first test) # base grid (y dim): [0.0, 2.0, 4.0, 6.0, 8.0] @test [z_g(iz,dz,Sxz) for iz = 1:size(Sxz,3)] == [0.0, 2.0, 4.0, 6.0, 8.0, 0.0] # base grid (z dim): [8.0, 0.0, 2.0, 4.0, 6.0, 8.0, 0.0, 2.0] end; finalize_global_grid(finalize_MPI=false); end; @testset "3. *_g functions (simulated 3x3x3 processes)" begin lx = 20; ly = 20; lz = 16; nx = 5; ny = 5; nz = 5; P = zeros(nx, ny, nz ); A = zeros(nx+1,ny-2,nz+2); init_global_grid(nx, ny, nz, dimx=1, dimy=1, dimz=1, periodz=1, quiet=true, init_MPI=false); # (Set dims, nprocs and nxyz_g in GG.global_grid().) 
dims = [3,3,3]; nxyz = GG.global_grid().nxyz; periods = GG.global_grid().periods; overlaps = GG.global_grid().overlaps; nprocs = prod(dims); nxyz_g = dims.*(nxyz.-overlaps) .+ overlaps.*(periods.==0); GG.global_grid().dims .= dims; GG.global_grid().nxyz_g .= nxyz_g; @testset "nx_g / ny_g / nz_g" begin @test nx_g() == nxyz_g[1] @test ny_g() == nxyz_g[2] @test nz_g() == nxyz_g[3] end; @testset "x_g / y_g / z_g" begin dx = lx/(nx_g()-1); dy = ly/(ny_g()-1); dz = lz/(nz_g()-1); # (for P) @coords(1)=0; @test [x_g(ix,dx,P) for ix = 1:size(P,1)] == [0.0, 2.0, 4.0, 6.0, 8.0] @coords(1)=1; @test [x_g(ix,dx,P) for ix = 1:size(P,1)] == [6.0, 8.0, 10.0, 12.0, 14.0] @coords(1)=2; @test [x_g(ix,dx,P) for ix = 1:size(P,1)] == [12.0, 14.0, 16.0, 18.0, 20.0] @coords(2)=0; @test [y_g(iy,dy,P) for iy = 1:size(P,2)] == [0.0, 2.0, 4.0, 6.0, 8.0] @coords(2)=1; @test [y_g(iy,dy,P) for iy = 1:size(P,2)] == [6.0, 8.0, 10.0, 12.0, 14.0] @coords(2)=2; @test [y_g(iy,dy,P) for iy = 1:size(P,2)] == [12.0, 14.0, 16.0, 18.0, 20.0] @coords(3)=0; @test [z_g(iz,dz,P) for iz = 1:size(P,3)] == [16.0, 0.0, 2.0, 4.0, 6.0] @coords(3)=1; @test [z_g(iz,dz,P) for iz = 1:size(P,3)] == [4.0, 6.0, 8.0, 10.0, 12.0] @coords(3)=2; @test [z_g(iz,dz,P) for iz = 1:size(P,3)] == [10.0, 12.0, 14.0, 16.0, 0.0] # (for A) @coords(1)=0; @test [x_g(ix,dx,A) for ix = 1:size(A,1)] == [-1.0, 1.0, 3.0, 5.0, 7.0, 9.0] @coords(1)=1; @test [x_g(ix,dx,A) for ix = 1:size(A,1)] == [5.0, 7.0, 9.0, 11.0, 13.0, 15.0] @coords(1)=2; @test [x_g(ix,dx,A) for ix = 1:size(A,1)] == [11.0, 13.0, 15.0, 17.0, 19.0, 21.0] @coords(2)=0; @test [y_g(iy,dy,A) for iy = 1:size(A,2)] == [2.0, 4.0, 6.0] @coords(2)=1; @test [y_g(iy,dy,A) for iy = 1:size(A,2)] == [8.0, 10.0, 12.0] @coords(2)=2; @test [y_g(iy,dy,A) for iy = 1:size(A,2)] == [14.0, 16.0, 18.0] @coords(3)=0; @test [z_g(iz,dz,A) for iz = 1:size(A,3)] == [14.0, 16.0, 0.0, 2.0, 4.0, 6.0, 8.0] @coords(3)=1; @test [z_g(iz,dz,A) for iz = 1:size(A,3)] == [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0] @coords(3)=2; @test [z_g(iz,dz,A) for iz = 1:size(A,3)] == [8.0, 10.0, 12.0, 14.0, 16.0, 0.0, 2.0] end; finalize_global_grid(finalize_MPI=false); end; end; ## Test tear down MPI.Finalize()
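# Hedged sketch (not part of ImplicitGlobalGrid): the tests above pin down how `x_g`/`y_g`/`z_g` map a
# local index to a global physical coordinate, including staggered and periodic cases. The fragment
# below shows the plain single-process call pattern those expected values come from; the numbers match
# the first testset (lx = 8.0 over a 5-point global grid, so dx = 2.0).
using ImplicitGlobalGrid

lx, nx, ny, nz = 8.0, 5, 5, 5
init_global_grid(nx, ny, nz; dimx=1, dimy=1, dimz=1, quiet=true)
P  = zeros(nx, ny, nz)
dx = lx / (nx_g() - 1)
xc = [x_g(ix, dx, P) for ix in 1:size(P, 1)]  # global x-coordinates of P's local grid points
@assert xc == [0.0, 2.0, 4.0, 6.0, 8.0]
finalize_global_grid()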
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
code
77804
# NOTE: All tests of this file can be run with any number of processes. # Nearly all of the functionality can however be verified with one single process # (thanks to the usage of periodic boundaries in most of the full halo update tests). push!(LOAD_PATH, "../src") using Test import MPI, Polyester using CUDA, AMDGPU using ImplicitGlobalGrid; GG = ImplicitGlobalGrid import ImplicitGlobalGrid: @require, longnameof test_cuda = CUDA.functional() test_amdgpu = AMDGPU.functional() array_types = ["CPU"] gpu_array_types = [] device_types = ["auto"] gpu_device_types = [] allocators = Function[zeros] gpu_allocators = [] ArrayConstructors = [Array] GPUArrayConstructors = [] CPUArray = Array if test_cuda cuzeros = CUDA.zeros push!(array_types, "CUDA") push!(gpu_array_types, "CUDA") push!(device_types, "CUDA") push!(gpu_device_types, "CUDA") push!(allocators, cuzeros) push!(gpu_allocators, cuzeros) push!(ArrayConstructors, CuArray) push!(GPUArrayConstructors, CuArray) end if test_amdgpu roczeros = AMDGPU.zeros push!(array_types, "AMDGPU") push!(gpu_array_types, "AMDGPU") push!(device_types, "AMDGPU") push!(gpu_device_types, "AMDGPU") push!(allocators, roczeros) push!(gpu_allocators, roczeros) push!(ArrayConstructors, ROCArray) push!(GPUArrayConstructors, ROCArray) end ## Test setup MPI.Init(); nprocs = MPI.Comm_size(MPI.COMM_WORLD); # NOTE: these tests can run with any number of processes. ndims_mpi = GG.NDIMS_MPI; nneighbors_per_dim = GG.NNEIGHBORS_PER_DIM; # Should be 2 (one left and one right neighbor). nx = 7; ny = 5; nz = 6; dx = 1.0 dy = 1.0 dz = 1.0 @testset "$(basename(@__FILE__)) (processes: $nprocs)" begin @testset "1. argument check ($array_type arrays)" for (array_type, device_type, zeros) in zip(array_types, device_types, allocators) init_global_grid(nx, ny, nz; quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); Sxz = zeros(nx-2,ny-1,nz-2); A = zeros(nx-1,ny+2,nz+1); A2 = A; Z = zeros(ComplexF64, nx-1,ny+2,nz+1); Z2 = Z; @test_throws ErrorException update_halo!(P, Sxz, A) # Error: Sxz has no halo. @test_throws ErrorException update_halo!(P, Sxz, A, Sxz) # Error: Sxz and Sxz have no halo. @test_throws ErrorException update_halo!(A, (A=P, halowidths=(1,0,1))) # Error: P has an invalid halowidth (less than 1). @test_throws ErrorException update_halo!(A, (A=P, halowidths=(2,2,2))) # Error: P has no halo. @test_throws ErrorException update_halo!((A=A, halowidths=(0,3,2)), (A=P, halowidths=(2,2,2))) # Error: A and P have no halo. @test_throws ErrorException update_halo!(P, A, A) # Error: A is given twice. @test_throws ErrorException update_halo!(P, A, A2) # Error: A2 is duplicate of A (an alias; it points to the same memory). @test_throws ErrorException update_halo!(P, A, A, A2) # Error: the second A and A2 are duplicates of the first A. @test_throws ErrorException update_halo!(Z, Z2) # Error: Z2 is duplicate of Z (an alias; it points to the same memory). @test_throws ErrorException update_halo!(Z, P) # Error: P is of different type than Z. @test_throws ErrorException update_halo!(Z, P, A) # Error: P and A are of different type than Z. finalize_global_grid(finalize_MPI=false); end; @testset "2. 
buffer allocation ($array_type arrays)" for (array_type, device_type, zeros) in zip(array_types, device_types, allocators) init_global_grid(nx, ny, nz, periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); A = zeros(nx-1,ny+2,nz+1); B = zeros(Float32, nx+1, ny+2, nz+3); C = zeros(Float32, nx+1, ny+1, nz+1); Z = zeros(ComplexF16, nx, ny, nz ); Y = zeros(ComplexF16, nx-1, ny+2, nz+1); P, A, B, C, Z, Y = GG.wrap_field.((P, A, B, C, Z, Y)); halowidths = (3,1,2); A_hw, Z_hw = GG.wrap_field(A.A, halowidths), GG.wrap_field(Z.A, halowidths); @testset "free buffers" begin @require GG.get_sendbufs_raw() === nothing @require GG.get_recvbufs_raw() === nothing GG.allocate_bufs(P); @require GG.get_sendbufs_raw() !== nothing @require GG.get_recvbufs_raw() !== nothing GG.free_update_halo_buffers(); @test GG.get_sendbufs_raw() === nothing @test GG.get_recvbufs_raw() === nothing end; @testset "allocate single" begin GG.free_update_halo_buffers(); GG.allocate_bufs(P); for bufs_raw in [GG.get_sendbufs_raw(), GG.get_recvbufs_raw()] @test length(bufs_raw) == 1 # 1 array @test length(bufs_raw[1]) == nneighbors_per_dim # 2 neighbors per dimension for n = 1:nneighbors_per_dim @test length(bufs_raw[1][n]) >= prod(sort([size(P)...])[2:end]) # required length: max halo elements in any of the dimensions end end end; @testset "allocate single (Complex)" begin GG.free_update_halo_buffers(); GG.allocate_bufs(Z); for bufs_raw in [GG.get_sendbufs_raw(), GG.get_recvbufs_raw()] @test length(bufs_raw) == 1 # 1 array @test length(bufs_raw[1]) == nneighbors_per_dim # 2 neighbors per dimension for n = 1:nneighbors_per_dim @test length(bufs_raw[1][n]) >= prod(sort([size(Z)...])[2:end]) # required length: max halo elements in any of the dimensions end end end; @testset "allocate single (halowidth > 1)" begin GG.free_update_halo_buffers(); GG.allocate_bufs(A_hw); max_halo_elems = maximum((size(A,1)*size(A,2)*halowidths[3], size(A,1)*size(A,3)*halowidths[2], size(A,2)*size(A,3)*halowidths[1])); for bufs_raw in [GG.get_sendbufs_raw(), GG.get_recvbufs_raw()] @test length(bufs_raw) == 1 # 1 array @test length(bufs_raw[1]) == nneighbors_per_dim # 2 neighbors per dimension for n = 1:nneighbors_per_dim @test length(bufs_raw[1][n]) >= max_halo_elems # required length: max halo elements in any of the dimensions end end end; @testset "keep 1st, allocate 2nd" begin GG.free_update_halo_buffers(); GG.allocate_bufs(P); GG.allocate_bufs(A, P); for bufs_raw in [GG.get_sendbufs_raw(), GG.get_recvbufs_raw()] @test length(bufs_raw) == 2 # 2 arrays @test length(bufs_raw[1]) == nneighbors_per_dim # 2 neighbors per dimension @test length(bufs_raw[2]) == nneighbors_per_dim # 2 neighbors per dimension for n = 1:nneighbors_per_dim @test length(bufs_raw[1][n]) >= prod(sort([size(A)...])[2:end]) # required length: max halo elements in any of the dimensions @test length(bufs_raw[2][n]) >= prod(sort([size(P)...])[2:end]) # ... 
end end end; @testset "keep 1st, allocate 2nd (Complex)" begin GG.free_update_halo_buffers(); GG.allocate_bufs(Z); GG.allocate_bufs(Y, Z); for bufs_raw in [GG.get_sendbufs_raw(), GG.get_recvbufs_raw()] @test length(bufs_raw) == 2 # 2 arrays @test length(bufs_raw[1]) == nneighbors_per_dim # 2 neighbors per dimension @test length(bufs_raw[2]) == nneighbors_per_dim # 2 neighbors per dimension for n = 1:nneighbors_per_dim @test length(bufs_raw[1][n]) >= prod(sort([size(Y)...])[2:end]) # required length: max halo elements in any of the dimensions @test length(bufs_raw[2][n]) >= prod(sort([size(Z)...])[2:end]) # ... end end end; @testset "reinterpret (no allocation)" begin GG.free_update_halo_buffers(); GG.allocate_bufs(A, P); GG.allocate_bufs(B, C); # The new arrays contain Float32 (A, and P were Float64); B and C have a halo with more elements than A and P had, but they require less space in memory for bufs_raw in [GG.get_sendbufs_raw(), GG.get_recvbufs_raw()] @test length(bufs_raw) == 2 # Still 2 arrays: B, C (even though they are different then before: was A and P) @test length(bufs_raw[1]) == nneighbors_per_dim # 2 neighbors per dimension @test length(bufs_raw[2]) == nneighbors_per_dim # 2 neighbors per dimension for n = 1:nneighbors_per_dim @test length(bufs_raw[1][n]) >= prod(sort([size(B)...])[2:end]) # required length: max halo elements in any of the dimensions @test length(bufs_raw[2][n]) >= prod(sort([size(C)...])[2:end]) # ... end @test all([eltype(bufs_raw[i][n]) == Float32 for i=1:length(bufs_raw), n=1:nneighbors_per_dim]) end end; @testset "reinterpret (no allocation) (Complex)" begin GG.free_update_halo_buffers(); GG.allocate_bufs(A, P); GG.allocate_bufs(Y, Z); # The new arrays contain Float32 (A, and P were Float64); B and C have a halo with more elements than A and P had, but they require less space in memory for bufs_raw in [GG.get_sendbufs_raw(), GG.get_recvbufs_raw()] @test length(bufs_raw) == 2 # Still 2 arrays: B, C (even though they are different then before: was A and P) @test length(bufs_raw[1]) == nneighbors_per_dim # 2 neighbors per dimension @test length(bufs_raw[2]) == nneighbors_per_dim # 2 neighbors per dimension for n = 1:nneighbors_per_dim @test length(bufs_raw[1][n]) >= prod(sort([size(Y)...])[2:end]) # required length: max halo elements in any of the dimensions @test length(bufs_raw[2][n]) >= prod(sort([size(Z)...])[2:end]) # ... 
end @test all([eltype(bufs_raw[i][n]) == ComplexF16 for i=1:length(bufs_raw), n=1:nneighbors_per_dim]) end end; @testset "(cu/roc)sendbuf / (cu/roc)recvbuf" begin sendbuf, recvbuf = (GG.sendbuf, GG.recvbuf); if array_type in ["CUDA", "AMDGPU"] sendbuf, recvbuf = (GG.gpusendbuf, GG.gpurecvbuf); end GG.free_update_halo_buffers(); GG.allocate_bufs(A, P); for dim = 1:ndims(A), n = 1:nneighbors_per_dim @test all(length(sendbuf(n,dim,1,A)) .== prod(size(A)[1:ndims(A).!=dim])) @test all(length(recvbuf(n,dim,1,A)) .== prod(size(A)[1:ndims(A).!=dim])) @test all(size(sendbuf(n,dim,1,A))[dim] .== A.halowidths[dim]) @test all(size(recvbuf(n,dim,1,A))[dim] .== A.halowidths[dim]) end for dim = 1:ndims(P), n = 1:nneighbors_per_dim @test all(length(sendbuf(n,dim,2,P)) .== prod(size(P)[1:ndims(P).!=dim])) @test all(length(recvbuf(n,dim,2,P)) .== prod(size(P)[1:ndims(P).!=dim])) @test all(size(sendbuf(n,dim,2,P))[dim] .== P.halowidths[dim]) @test all(size(recvbuf(n,dim,2,P))[dim] .== P.halowidths[dim]) end end; @testset "(cu/roc)sendbuf / (cu/roc)recvbuf (Complex)" begin sendbuf, recvbuf = (GG.sendbuf, GG.recvbuf); if array_type in ["CUDA", "AMDGPU"] sendbuf, recvbuf = (GG.gpusendbuf, GG.gpurecvbuf); end GG.free_update_halo_buffers(); GG.allocate_bufs(Y, Z); for dim = 1:ndims(Y), n = 1:nneighbors_per_dim @test all(length(sendbuf(n,dim,1,Y)) .== prod(size(Y)[1:ndims(Y).!=dim])) @test all(length(recvbuf(n,dim,1,Y)) .== prod(size(Y)[1:ndims(Y).!=dim])) @test all(size(sendbuf(n,dim,1,Y))[dim] .== Y.halowidths[dim]) @test all(size(recvbuf(n,dim,1,Y))[dim] .== Y.halowidths[dim]) end for dim = 1:ndims(Z), n = 1:nneighbors_per_dim @test all(length(sendbuf(n,dim,2,Z)) .== prod(size(Z)[1:ndims(Z).!=dim])) @test all(length(recvbuf(n,dim,2,Z)) .== prod(size(Z)[1:ndims(Z).!=dim])) @test all(size(sendbuf(n,dim,2,Z))[dim] .== Z.halowidths[dim]) @test all(size(recvbuf(n,dim,2,Z))[dim] .== Z.halowidths[dim]) end end; @testset "(cu/roc)sendbuf / (cu/roc)recvbuf (halowidth > 1)" begin sendbuf, recvbuf = (GG.sendbuf, GG.recvbuf); if array_type in ["CUDA", "AMDGPU"] sendbuf, recvbuf = (GG.gpusendbuf, GG.gpurecvbuf); end GG.free_update_halo_buffers(); GG.allocate_bufs(A_hw); for dim = 1:ndims(A_hw), n = 1:nneighbors_per_dim @test all(length(sendbuf(n,dim,1,A_hw)) .== prod(size(A_hw)[1:ndims(A_hw).!=dim])*A_hw.halowidths[dim]) @test all(length(recvbuf(n,dim,1,A_hw)) .== prod(size(A_hw)[1:ndims(A_hw).!=dim])*A_hw.halowidths[dim]) @test all(size(sendbuf(n,dim,1,A_hw))[dim] .== A_hw.halowidths[dim]) @test all(size(recvbuf(n,dim,1,A_hw))[dim] .== A_hw.halowidths[dim]) end end; @testset "(cu/roc)sendbuf / (cu/roc)recvbuf (halowidth > 1, Complex)" begin sendbuf, recvbuf = (GG.sendbuf, GG.recvbuf); if array_type in ["CUDA", "AMDGPU"] sendbuf, recvbuf = (GG.gpusendbuf, GG.gpurecvbuf); end GG.free_update_halo_buffers(); GG.allocate_bufs(Z_hw); for dim = 1:ndims(Z_hw), n = 1:nneighbors_per_dim @test all(length(sendbuf(n,dim,1,Z_hw)) .== prod(size(Z_hw)[1:ndims(Z_hw).!=dim])*Z_hw.halowidths[dim]) @test all(length(recvbuf(n,dim,1,Z_hw)) .== prod(size(Z_hw)[1:ndims(Z_hw).!=dim])*Z_hw.halowidths[dim]) @test all(size(sendbuf(n,dim,1,Z_hw))[dim] .== Z_hw.halowidths[dim]) @test all(size(recvbuf(n,dim,1,Z_hw))[dim] .== Z_hw.halowidths[dim]) end end; finalize_global_grid(finalize_MPI=false); end; @testset "3. data transfer components" begin @testset "iwrite_sendbufs! / iread_recvbufs!" 
begin @testset "sendranges / recvranges ($array_type arrays)" for (array_type, device_type, zeros) in zip(array_types, device_types, allocators) init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(2,2,3), quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); A = zeros(nx-1,ny+2,nz+1); P, A = GG.wrap_field.((P, A)); @test GG.sendranges(1, 1, P) == [ 2:2, 1:size(P,2), 1:size(P,3)] @test GG.sendranges(2, 1, P) == [size(P,1)-1:size(P,1)-1, 1:size(P,2), 1:size(P,3)] @test GG.sendranges(1, 2, P) == [ 1:size(P,1), 2:2, 1:size(P,3)] @test GG.sendranges(2, 2, P) == [ 1:size(P,1), size(P,2)-1:size(P,2)-1, 1:size(P,3)] @test GG.sendranges(1, 3, P) == [ 1:size(P,1), 1:size(P,2), 3:3] @test GG.sendranges(2, 3, P) == [ 1:size(P,1), 1:size(P,2), size(P,3)-2:size(P,3)-2] @test GG.recvranges(1, 1, P) == [ 1:1, 1:size(P,2), 1:size(P,3)] @test GG.recvranges(2, 1, P) == [ size(P,1):size(P,1), 1:size(P,2), 1:size(P,3)] @test GG.recvranges(1, 2, P) == [ 1:size(P,1), 1:1, 1:size(P,3)] @test GG.recvranges(2, 2, P) == [ 1:size(P,1), size(P,2):size(P,2), 1:size(P,3)] @test GG.recvranges(1, 3, P) == [ 1:size(P,1), 1:size(P,2), 1:1] @test GG.recvranges(2, 3, P) == [ 1:size(P,1), 1:size(P,2), size(P,3):size(P,3)] @test_throws ErrorException GG.sendranges(1, 1, A) @test_throws ErrorException GG.sendranges(2, 1, A) @test GG.sendranges(1, 2, A) == [ 1:size(A,1), 4:4, 1:size(A,3)] @test GG.sendranges(2, 2, A) == [ 1:size(A,1), size(A,2)-3:size(A,2)-3, 1:size(A,3)] @test GG.sendranges(1, 3, A) == [ 1:size(A,1), 1:size(A,2), 4:4] @test GG.sendranges(2, 3, A) == [ 1:size(A,1), 1:size(A,2), size(A,3)-3:size(A,3)-3] @test_throws ErrorException GG.recvranges(1, 1, A) @test_throws ErrorException GG.recvranges(2, 1, A) @test GG.recvranges(1, 2, A) == [ 1:size(A,1), 1:1, 1:size(A,3)] @test GG.recvranges(2, 2, A) == [ 1:size(A,1), size(A,2):size(A,2), 1:size(A,3)] @test GG.recvranges(1, 3, A) == [ 1:size(A,1), 1:size(A,2), 1:1] @test GG.recvranges(2, 3, A) == [ 1:size(A,1), 1:size(A,2), size(A,3):size(A,3)] finalize_global_grid(finalize_MPI=false); end; @testset "sendranges / recvranges (halowidth > 1, $array_type arrays)" for (array_type, device_type, zeros) in zip(array_types, device_types, allocators) nx = 13; ny = 9; nz = 9; init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(6,4,4), halowidths=(3,1,2), quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); A = zeros(nx-1,ny+2,nz+1); P, A = GG.wrap_field.((P, A)); @test GG.sendranges(1, 1, P) == [ 4:6, 1:size(P,2), 1:size(P,3)] @test GG.sendranges(2, 1, P) == [size(P,1)-5:size(P,1)-3, 1:size(P,2), 1:size(P,3)] @test GG.sendranges(1, 2, P) == [ 1:size(P,1), 4:4, 1:size(P,3)] @test GG.sendranges(2, 2, P) == [ 1:size(P,1), size(P,2)-3:size(P,2)-3, 1:size(P,3)] @test GG.sendranges(1, 3, P) == [ 1:size(P,1), 1:size(P,2), 3:4] @test GG.sendranges(2, 3, P) == [ 1:size(P,1), 1:size(P,2), size(P,3)-3:size(P,3)-2] @test GG.recvranges(1, 1, P) == [ 1:3, 1:size(P,2), 1:size(P,3)] @test GG.recvranges(2, 1, P) == [ size(P,1)-2:size(P,1), 1:size(P,2), 1:size(P,3)] @test GG.recvranges(1, 2, P) == [ 1:size(P,1), 1:1, 1:size(P,3)] @test GG.recvranges(2, 2, P) == [ 1:size(P,1), size(P,2):size(P,2), 1:size(P,3)] @test GG.recvranges(1, 3, P) == [ 1:size(P,1), 1:size(P,2), 1:2] @test GG.recvranges(2, 3, P) == [ 1:size(P,1), 1:size(P,2), size(P,3)-1:size(P,3)] @test_throws ErrorException GG.sendranges(1, 1, A) @test_throws ErrorException GG.sendranges(2, 1, A) @test GG.sendranges(1, 2, A) == [ 1:size(A,1), 6:6, 
1:size(A,3)] @test GG.sendranges(2, 2, A) == [ 1:size(A,1), size(A,2)-5:size(A,2)-5, 1:size(A,3)] @test GG.sendranges(1, 3, A) == [ 1:size(A,1), 1:size(A,2), 4:5] @test GG.sendranges(2, 3, A) == [ 1:size(A,1), 1:size(A,2), size(A,3)-4:size(A,3)-3] @test_throws ErrorException GG.recvranges(1, 1, A) @test_throws ErrorException GG.recvranges(2, 1, A) @test GG.recvranges(1, 2, A) == [ 1:size(A,1), 1:1, 1:size(A,3)] @test GG.recvranges(2, 2, A) == [ 1:size(A,1), size(A,2):size(A,2), 1:size(A,3)] @test GG.recvranges(1, 3, A) == [ 1:size(A,1), 1:size(A,2), 1:2] @test GG.recvranges(2, 3, A) == [ 1:size(A,1), 1:size(A,2), size(A,3)-1:size(A,3)] finalize_global_grid(finalize_MPI=false); end; @testset "write_h2h! / read_h2h!" begin init_global_grid(nx, ny, nz; quiet=true, init_MPI=false); P = zeros(nx, ny, nz ); P .= [iz*1e2 + iy*1e1 + ix for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P2 = zeros(size(P)); halowidths = (1,1,1) # (dim=1) buf = zeros(halowidths[1], size(P,2), size(P,3)); ranges = [2:2, 1:size(P,2), 1:size(P,3)]; GG.write_h2h!(buf, P, ranges, 1); @test all(buf[:] .== P[ranges[1],ranges[2],ranges[3]][:]) GG.read_h2h!(buf, P2, ranges, 1); @test all(buf[:] .== P2[ranges[1],ranges[2],ranges[3]][:]) # (dim=2) buf = zeros(size(P,1), halowidths[2], size(P,3)); ranges = [1:size(P,1), 3:3, 1:size(P,3)]; GG.write_h2h!(buf, P, ranges, 2); @test all(buf[:] .== P[ranges[1],ranges[2],ranges[3]][:]) GG.read_h2h!(buf, P2, ranges, 2); @test all(buf[:] .== P2[ranges[1],ranges[2],ranges[3]][:]) # (dim=3) buf = zeros(size(P,1), size(P,2), halowidths[3]); ranges = [1:size(P,1), 1:size(P,2), 4:4]; GG.write_h2h!(buf, P, ranges, 3); @test all(buf[:] .== P[ranges[1],ranges[2],ranges[3]][:]) GG.read_h2h!(buf, P2, ranges, 3); @test all(buf[:] .== P2[ranges[1],ranges[2],ranges[3]][:]) finalize_global_grid(finalize_MPI=false); end; @testset "write_h2h! / read_h2h! (halowidth > 1)" begin init_global_grid(nx, ny, nz; quiet=true, init_MPI=false); P = zeros(nx, ny, nz ); P .= [iz*1e2 + iy*1e1 + ix for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P2 = zeros(size(P)); halowidths = (3,1,2); # (dim=1) buf = zeros(halowidths[1], size(P,2), size(P,3)); ranges = [4:6, 1:size(P,2), 1:size(P,3)]; GG.write_h2h!(buf, P, ranges, 1); @test all(buf[:] .== P[ranges[1],ranges[2],ranges[3]][:]) GG.read_h2h!(buf, P2, ranges, 1); @test all(buf[:] .== P2[ranges[1],ranges[2],ranges[3]][:]) # (dim=2) buf = zeros(size(P,1), halowidths[2], size(P,3)); ranges = [1:size(P,1), 4:4, 1:size(P,3)]; GG.write_h2h!(buf, P, ranges, 2); @test all(buf[:] .== P[ranges[1],ranges[2],ranges[3]][:]) GG.read_h2h!(buf, P2, ranges, 2); @test all(buf[:] .== P2[ranges[1],ranges[2],ranges[3]][:]) # (dim=3) buf = zeros(size(P,1), size(P,2), halowidths[3]); ranges = [1:size(P,1), 1:size(P,2), 3:4]; GG.write_h2h!(buf, P, ranges, 3); @test all(buf[:] .== P[ranges[1],ranges[2],ranges[3]][:]) GG.read_h2h!(buf, P2, ranges, 3); @test all(buf[:] .== P2[ranges[1],ranges[2],ranges[3]][:]) finalize_global_grid(finalize_MPI=false); end; @static if test_cuda || test_amdgpu @testset "write_d2x! / write_d2h_async! / read_x2d! / read_h2d_async! 
($array_type arrays)" for (array_type, device_type, gpuzeros, GPUArray) in zip(gpu_array_types, gpu_device_types, gpu_allocators, GPUArrayConstructors) init_global_grid(nx, ny, nz; quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); P .= [iz*1e2 + iy*1e1 + ix for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P = GPUArray(P); halowidths = (1,3,1) if array_type == "CUDA" # (dim=1) dim = 1; P2 = gpuzeros(eltype(P),size(P)); buf = zeros(halowidths[dim], size(P,2), size(P,3)); buf_d, buf_h = GG.register(CuArray,buf); ranges = [2:2, 1:size(P,2), 1:size(P,3)]; nthreads = (1, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @cuda blocks=nblocks threads=nthreads GG.write_d2x!(buf_d, P, ranges[1], ranges[2], ranges[3], dim); CUDA.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) @cuda blocks=nblocks threads=nthreads GG.read_x2d!(buf_d, P2, ranges[1], ranges[2], ranges[3], dim); CUDA.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) buf .= 0.0; P2 .= 0.0; custream = stream(); GG.write_d2h_async!(buf, P, ranges, custream); CUDA.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) GG.read_h2d_async!(buf, P2, ranges, custream); CUDA.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) CUDA.Mem.unregister(buf_h); # (dim=2) dim = 2; P2 = gpuzeros(eltype(P),size(P)); buf = zeros(size(P,1), halowidths[dim], size(P,3)); buf_d, buf_h = GG.register(CuArray,buf); ranges = [1:size(P,1), 2:4, 1:size(P,3)]; nthreads = (1, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @cuda blocks=nblocks threads=nthreads GG.write_d2x!(buf_d, P, ranges[1], ranges[2], ranges[3], dim); CUDA.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) @cuda blocks=nblocks threads=nthreads GG.read_x2d!(buf_d, P2, ranges[1], ranges[2], ranges[3], dim); CUDA.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) buf .= 0.0; P2 .= 0.0; custream = stream(); GG.write_d2h_async!(buf, P, ranges, custream); CUDA.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) GG.read_h2d_async!(buf, P2, ranges, custream); CUDA.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) CUDA.Mem.unregister(buf_h); # (dim=3) dim = 3 P2 = gpuzeros(eltype(P),size(P)); buf = zeros(size(P,1), size(P,2), halowidths[dim]); buf_d, buf_h = GG.register(CuArray,buf); ranges = [1:size(P,1), 1:size(P,2), 4:4]; nthreads = (1, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @cuda blocks=nblocks threads=nthreads GG.write_d2x!(buf_d, P, ranges[1], ranges[2], ranges[3], dim); CUDA.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) @cuda blocks=nblocks threads=nthreads GG.read_x2d!(buf_d, P2, ranges[1], ranges[2], ranges[3], dim); CUDA.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) buf .= 0.0; P2 .= 0.0; custream = stream(); GG.write_d2h_async!(buf, P, ranges, custream); CUDA.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) GG.read_h2d_async!(buf, P2, ranges, custream); CUDA.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) CUDA.Mem.unregister(buf_h); elseif array_type == "AMDGPU" # (dim=1) dim = 1; P2 = gpuzeros(eltype(P),size(P)); buf = 
zeros(halowidths[dim], size(P,2), size(P,3)); buf_d = GG.register(ROCArray,buf); ranges = [2:2, 1:size(P,2), 1:size(P,3)]; nthreads = (1, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @roc gridsize=nblocks groupsize=nthreads GG.write_d2x!(buf_d, P, ranges[1], ranges[2], ranges[3], dim); AMDGPU.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) @roc gridsize=nblocks groupsize=nthreads GG.read_x2d!(buf_d, P2, ranges[1], ranges[2], ranges[3], dim); AMDGPU.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) # buf .= 0.0; # DEBUG: diabling read_x2x_async! tests for now in AMDGPU backend because there is an issue most likely in HIP # P2 .= 0.0; # rocstream = AMDGPU.HIPStream(); # GG.write_d2h_async!(buf, P, ranges, rocstream); AMDGPU.synchronize(); # @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) # GG.read_h2d_async!(buf, P2, ranges, rocstream); AMDGPU.synchronize(); # @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) # AMDGPU.unsafe_free!(buf_d); # (dim=2) dim = 2; P2 = gpuzeros(eltype(P),size(P)); buf = zeros(size(P,1), halowidths[dim], size(P,3)); buf_d = GG.register(ROCArray,buf); ranges = [1:size(P,1), 2:4, 1:size(P,3)]; nthreads = (1, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @roc gridsize=nblocks groupsize=nthreads GG.write_d2x!(buf_d, P, ranges[1], ranges[2], ranges[3], dim); AMDGPU.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) @roc gridsize=nblocks groupsize=nthreads GG.read_x2d!(buf_d, P2, ranges[1], ranges[2], ranges[3], dim); AMDGPU.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) # buf .= 0.0; # DEBUG: diabling read_x2x_async! tests for now in AMDGPU backend because there is an issue most likely in HIP # P2 .= 0.0; # rocstream = AMDGPU.HIPStream(); # GG.write_d2h_async!(buf, P, ranges, rocstream); AMDGPU.synchronize(); # @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) # GG.read_h2d_async!(buf, P2, ranges, rocstream); AMDGPU.synchronize(); # @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) # AMDGPU.unsafe_free!(buf_d); # (dim=3) dim = 3 P2 = gpuzeros(eltype(P),size(P)); buf = zeros(size(P,1), size(P,2), halowidths[dim]); buf_d = GG.register(ROCArray,buf); ranges = [1:size(P,1), 1:size(P,2), 4:4]; nthreads = (1, 1, 1); halosize = [r[end] - r[1] + 1 for r in ranges]; nblocks = Tuple(ceil.(Int, halosize./nthreads)); @roc gridsize=nblocks groupsize=nthreads GG.write_d2x!(buf_d, P, ranges[1], ranges[2], ranges[3], dim); AMDGPU.synchronize(); @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) @roc gridsize=nblocks groupsize=nthreads GG.read_x2d!(buf_d, P2, ranges[1], ranges[2], ranges[3], dim); AMDGPU.synchronize(); @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) # buf .= 0.0; # DEBUG: diabling read_x2x_async! tests for now in AMDGPU backend because there is an issue most likely in HIP # P2 .= 0.0; # rocstream = AMDGPU.HIPStream(); # GG.write_d2h_async!(buf, P, ranges, rocstream); AMDGPU.synchronize(); # @test all(buf[:] .== Array(P[ranges[1],ranges[2],ranges[3]][:])) # GG.read_h2d_async!(buf, P2, ranges, rocstream); AMDGPU.synchronize(); # @test all(buf[:] .== Array(P2[ranges[1],ranges[2],ranges[3]][:])) # AMDGPU.unsafe_free!(buf_d); end finalize_global_grid(finalize_MPI=false); end; end @testset "iwrite_sendbufs! 
($array_type arrays)" for (array_type, device_type, zeros, Array) in zip(array_types, device_types, allocators, ArrayConstructors) init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(4,2,3), halowidths=(2,1,1), quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); A = zeros(nx-1,ny+2,nz+1); P .= Array([iz*1e2 + iy*1e1 + ix for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]); A .= Array([iz*1e2 + iy*1e1 + ix for ix=1:size(A,1), iy=1:size(A,2), iz=1:size(A,3)]); P, A = GG.wrap_field.((P, A)); GG.allocate_bufs(P, A); if (array_type == "CUDA") GG.allocate_custreams(P, A); elseif (array_type == "AMDGPU") GG.allocate_rocstreams(P, A); else GG.allocate_tasks(P, A); end dim = 1 n = 1 GG.iwrite_sendbufs!(n, dim, P, 1); GG.iwrite_sendbufs!(n, dim, A, 2); GG.wait_iwrite(n, P, 1); GG.wait_iwrite(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,1,P) .== Array(P.A[3:4,:,:][:]))) # DEBUG: here and later, CPUArray is needed to avoid error in AMDGPU because of mapreduce @test all(CPUArray(GG.gpusendbuf_flat(n,dim,2,A) .== 0.0)) else @test all(GG.sendbuf_flat(n,dim,1,P) .== CPUArray(P.A[3:4,:,:][:])) @test all(GG.sendbuf_flat(n,dim,2,A) .== 0.0) end n = 2 GG.iwrite_sendbufs!(n, dim, P, 1); GG.iwrite_sendbufs!(n, dim, A, 2); GG.wait_iwrite(n, P, 1); GG.wait_iwrite(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,1,P) .== Array(P.A[end-3:end-2,:,:][:]))) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,2,A) .== 0.0)) else @test all(GG.sendbuf_flat(n,dim,1,P) .== CPUArray(P.A[end-3:end-2,:,:][:])) @test all(GG.sendbuf_flat(n,dim,2,A) .== 0.0) end dim = 2 n = 1 GG.iwrite_sendbufs!(n, dim, P, 1); GG.iwrite_sendbufs!(n, dim, A, 2); GG.wait_iwrite(n, P, 1); GG.wait_iwrite(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,1,P) .== Array(P.A[:,2,:][:]))) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,2,A) .== Array(A.A[:,4,:][:]))) else @test all(GG.sendbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,2,:][:])) @test all(GG.sendbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,4,:][:])) end n = 2 GG.iwrite_sendbufs!(n, dim, P, 1); GG.iwrite_sendbufs!(n, dim, A, 2); GG.wait_iwrite(n, P, 1); GG.wait_iwrite(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,1,P) .== Array(P.A[:,end-1,:][:]))) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,2,A) .== Array(A.A[:,end-3,:][:]))) else @test all(GG.sendbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,end-1,:][:])) @test all(GG.sendbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,end-3,:][:])) end dim = 3 n = 1 GG.iwrite_sendbufs!(n, dim, P, 1); GG.iwrite_sendbufs!(n, dim, A, 2); GG.wait_iwrite(n, P, 1); GG.wait_iwrite(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,1,P) .== Array(P.A[:,:,3][:]))) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,2,A) .== Array(A.A[:,:,4][:]))) else @test all(GG.sendbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,:,3][:])) @test all(GG.sendbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,:,4][:])) end n = 2 GG.iwrite_sendbufs!(n, dim, P, 1); GG.iwrite_sendbufs!(n, dim, A, 2); GG.wait_iwrite(n, P, 1); GG.wait_iwrite(n, A, 2); if (array_type=="CUDA" && 
GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,1,P) .== Array(P.A[:,:,end-2][:]))) @test all(CPUArray(GG.gpusendbuf_flat(n,dim,2,A) .== Array(A.A[:,:,end-3][:]))) else @test all(GG.sendbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,:,end-2][:])) @test all(GG.sendbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,:,end-3][:])) end finalize_global_grid(finalize_MPI=false); end; @testset "iread_recvbufs! ($array_type arrays)" for (array_type, device_type, zeros, Array) in zip(array_types, device_types, allocators, ArrayConstructors) init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(4,2,3), halowidths=(2,1,1), quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); A = zeros(nx-1,ny+2,nz+1); P, A = GG.wrap_field.((P, A)); GG.allocate_bufs(P, A); if (array_type == "CUDA") GG.allocate_custreams(P, A); elseif (array_type == "AMDGPU") GG.allocate_rocstreams(P, A); else GG.allocate_tasks(P, A); end dim = 1 for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) GG.gpurecvbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.gpurecvbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; else GG.recvbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.recvbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; end end n = 1 GG.iread_recvbufs!(n, dim, P, 1); GG.iread_recvbufs!(n, dim, A, 2); GG.wait_iread(n, P, 1); GG.wait_iread(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,1,P) .== Array(P.A[1:2,:,:][:]))) @test all(CPUArray( 0.0 .== Array(A.A[1:2,:,:][:]))) else @test all(GG.recvbuf_flat(n,dim,1,P) .== CPUArray(P.A[1:2,:,:][:])) @test all( 0.0 .== CPUArray(A.A[1:2,:,:][:])) end n = 2 GG.iread_recvbufs!(n, dim, P, 1); GG.iread_recvbufs!(n, dim, A, 2); GG.wait_iread(n, P, 1); GG.wait_iread(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,1,P) .== Array(P.A[end-1:end,:,:][:]))) @test all(CPUArray( 0.0 .== Array(A.A[end-1:end,:,:][:]))) else @test all(GG.recvbuf_flat(n,dim,1,P) .== CPUArray(P.A[end-1:end,:,:][:])) @test all( 0.0 .== CPUArray(A.A[end-1:end,:,:][:])) end dim = 2 for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) GG.gpurecvbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.gpurecvbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; else GG.recvbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.recvbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; end end n = 1 GG.iread_recvbufs!(n, dim, P, 1); GG.iread_recvbufs!(n, dim, A, 2); GG.wait_iread(n, P, 1); GG.wait_iread(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,1,P) .== Array(P.A[:,1,:][:]))) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,2,A) .== Array(A.A[:,1,:][:]))) else @test all(GG.recvbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,1,:][:])) @test all(GG.recvbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,1,:][:])) end n = 2 GG.iread_recvbufs!(n, dim, P, 1); GG.iread_recvbufs!(n, dim, A, 2); GG.wait_iread(n, P, 1); GG.wait_iread(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,1,P) .== Array(P.A[:,end,:][:]))) @test 
all(CPUArray(GG.gpurecvbuf_flat(n,dim,2,A) .== Array(A.A[:,end,:][:]))) else @test all(GG.recvbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,end,:][:])) @test all(GG.recvbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,end,:][:])) end dim = 3 for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) GG.gpurecvbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.gpurecvbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; else GG.recvbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.recvbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; end end n = 1 GG.iread_recvbufs!(n, dim, P, 1); GG.iread_recvbufs!(n, dim, A, 2); GG.wait_iread(n, P, 1); GG.wait_iread(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,1,P) .== Array(P.A[:,:,1][:]))) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,2,A) .== Array(A.A[:,:,1][:]))) else @test all(GG.recvbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,:,1][:])) @test all(GG.recvbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,:,1][:])) end n = 2 GG.iread_recvbufs!(n, dim, P, 1); GG.iread_recvbufs!(n, dim, A, 2); GG.wait_iread(n, P, 1); GG.wait_iread(n, A, 2); if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,1,P) .== Array(P.A[:,:,end][:]))) @test all(CPUArray(GG.gpurecvbuf_flat(n,dim,2,A) .== Array(A.A[:,:,end][:]))) else @test all(GG.recvbuf_flat(n,dim,1,P) .== CPUArray(P.A[:,:,end][:])) @test all(GG.recvbuf_flat(n,dim,2,A) .== CPUArray(A.A[:,:,end][:])) end finalize_global_grid(finalize_MPI=false); end; if (nprocs==1) @testset "sendrecv_halo_local ($array_type arrays)" for (array_type, device_type, zeros) in zip(array_types, device_types, allocators) init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(4,2,3), halowidths=(2,1,1), quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz ); A = zeros(nx-1,ny+2,nz+1); P, A = GG.wrap_field.((P, A)); GG.allocate_bufs(P, A); dim = 1 for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) GG.gpusendbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.gpusendbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; else GG.sendbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.sendbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; end end for n = 1:nneighbors_per_dim GG.sendrecv_halo_local(n, dim, P, 1); GG.sendrecv_halo_local(n, dim, A, 2); end if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(1,dim,1,P) .== GG.gpusendbuf_flat(2,dim,1,P))); @test all(CPUArray(GG.gpurecvbuf_flat(1,dim,2,A) .== 0.0)); # There is no halo (ol(dim,A) < 2). @test all(CPUArray(GG.gpurecvbuf_flat(2,dim,1,P) .== GG.gpusendbuf_flat(1,dim,1,P))); @test all(CPUArray(GG.gpurecvbuf_flat(2,dim,2,A) .== 0.0)); # There is no halo (ol(dim,A) < 2). else @test all(GG.recvbuf_flat(1,dim,1,P) .== GG.sendbuf_flat(2,dim,1,P)); @test all(GG.recvbuf_flat(1,dim,2,A) .== 0.0); # There is no halo (ol(dim,A) < 2). @test all(GG.recvbuf_flat(2,dim,1,P) .== GG.sendbuf_flat(1,dim,1,P)); @test all(GG.recvbuf_flat(2,dim,2,A) .== 0.0); # There is no halo (ol(dim,A) < 2). 
end dim = 2 for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) GG.gpusendbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.gpusendbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; else GG.sendbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.sendbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; end end for n = 1:nneighbors_per_dim GG.sendrecv_halo_local(n, dim, P, 1); GG.sendrecv_halo_local(n, dim, A, 2); end if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(1,dim,1,P) .== GG.gpusendbuf_flat(2,dim,1,P))); @test all(CPUArray(GG.gpurecvbuf_flat(1,dim,2,A) .== GG.gpusendbuf_flat(2,dim,2,A))); @test all(CPUArray(GG.gpurecvbuf_flat(2,dim,1,P) .== GG.gpusendbuf_flat(1,dim,1,P))); @test all(CPUArray(GG.gpurecvbuf_flat(2,dim,2,A) .== GG.gpusendbuf_flat(1,dim,2,A))); else @test all(GG.recvbuf_flat(1,dim,1,P) .== GG.sendbuf_flat(2,dim,1,P)); @test all(GG.recvbuf_flat(1,dim,2,A) .== GG.sendbuf_flat(2,dim,2,A)); @test all(GG.recvbuf_flat(2,dim,1,P) .== GG.sendbuf_flat(1,dim,1,P)); @test all(GG.recvbuf_flat(2,dim,2,A) .== GG.sendbuf_flat(1,dim,2,A)); end dim = 3 for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) GG.gpusendbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.gpusendbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; else GG.sendbuf_flat(n,dim,1,P) .= dim*1e2 + n*1e1 + 1; GG.sendbuf_flat(n,dim,2,A) .= dim*1e2 + n*1e1 + 2; end end for n = 1:nneighbors_per_dim GG.sendrecv_halo_local(n, dim, P, 1); GG.sendrecv_halo_local(n, dim, A, 2); end if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf_flat(1,dim,1,P) .== GG.gpusendbuf_flat(2,dim,1,P))); @test all(CPUArray(GG.gpurecvbuf_flat(1,dim,2,A) .== GG.gpusendbuf_flat(2,dim,2,A))); @test all(CPUArray(GG.gpurecvbuf_flat(2,dim,1,P) .== GG.gpusendbuf_flat(1,dim,1,P))); @test all(CPUArray(GG.gpurecvbuf_flat(2,dim,2,A) .== GG.gpusendbuf_flat(1,dim,2,A))); else @test all(GG.recvbuf_flat(1,dim,1,P) .== GG.sendbuf_flat(2,dim,1,P)); @test all(GG.recvbuf_flat(1,dim,2,A) .== GG.sendbuf_flat(2,dim,2,A)); @test all(GG.recvbuf_flat(2,dim,1,P) .== GG.sendbuf_flat(1,dim,1,P)); @test all(GG.recvbuf_flat(2,dim,2,A) .== GG.sendbuf_flat(1,dim,2,A)); end finalize_global_grid(finalize_MPI=false); end end end; if (nprocs>1) @testset "irecv_halo! / isend_halo ($array_type arrays)" for (array_type, device_type, zeros) in zip(array_types, device_types, allocators) me, dims, nprocs, coords, comm = init_global_grid(nx, ny, nz; dimy=1, dimz=1, periodx=1, overlaps=(4,4,4), halowidths=(2,1,2), quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx,ny,nz); A = zeros(nx,ny,nz); P, A = GG.wrap_field.((P, A)); dim = 1; GG.allocate_bufs(P, A); for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) GG.gpusendbuf(n,dim,1,P) .= 9.0; GG.gpurecvbuf(n,dim,1,P) .= 0; GG.gpusendbuf(n,dim,2,A) .= 9.0; GG.gpurecvbuf(n,dim,2,A) .= 0; else GG.sendbuf(n,dim,1,P) .= 9.0; GG.recvbuf(n,dim,1,P) .= 0; GG.sendbuf(n,dim,2,A) .= 9.0; GG.recvbuf(n,dim,2,A) .= 0; end end # DEBUG: Filling arrays is async (at least on AMDGPU); sync is needed. 
if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) CUDA.synchronize() elseif (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) AMDGPU.synchronize() end reqs = fill(MPI.REQUEST_NULL, 2, nneighbors_per_dim, 2); for n = 1:nneighbors_per_dim reqs[1,n,1] = GG.irecv_halo!(n, dim, P, 1); reqs[2,n,1] = GG.irecv_halo!(n, dim, A, 2); reqs[1,n,2] = GG.isend_halo(n, dim, P, 1); reqs[2,n,2] = GG.isend_halo(n, dim, A, 2); end @test all(reqs .!= [MPI.REQUEST_NULL]) MPI.Waitall!(reqs[:]); for n = 1:nneighbors_per_dim if (array_type=="CUDA" && GG.cudaaware_MPI(dim)) || (array_type=="AMDGPU" && GG.amdgpuaware_MPI(dim)) @test all(CPUArray(GG.gpurecvbuf(n,dim,1,P) .== 9.0)) @test all(CPUArray(GG.gpurecvbuf(n,dim,2,A) .== 9.0)) else @test all(GG.recvbuf(n,dim,1,P) .== 9.0) @test all(GG.recvbuf(n,dim,2,A) .== 9.0) end end finalize_global_grid(finalize_MPI=false); end; end end; # (Backup field filled with encoded coordinates and set boundary to zeros; then update halo and compare with backuped field; it should be the same again, except for the boundaries that are not halos) @testset "4. halo update ($array_type arrays)" for (array_type, device_type, Array) in zip(array_types, device_types, ArrayConstructors) @testset "basic grid (default: periodic)" begin @testset "1D" begin init_global_grid(nx, 1, 1; periodx=1, quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx); P .= [x_g(ix,dx,P) for ix=1:size(P,1)]; P_ref = copy(P); P[[1, end]] .= 0.0; P = Array(P); P_ref = Array(P_ref); @require !all(CPUArray(P .== P_ref)) # DEBUG: CPUArray needed here and onwards as mapreduce! is failing on AMDGPU (see https://github.com/JuliaGPU/AMDGPU.jl/issues/210) update_halo!(P); @test all(CPUArray(P .== P_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "2D" begin init_global_grid(nx, ny, 1; periodx=1, periody=1, quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny); P .= [y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2)]; P_ref = copy(P); P[[1, end], :] .= 0.0; P[ :,[1, end]] .= 0.0; P = Array(P); P_ref = Array(P_ref); @require !all(CPUArray(P .== P_ref)) update_halo!(P); @test all(CPUArray(P .== P_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "3D" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz); P .= [z_g(iz,dz,P)*1e2 + y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P_ref = copy(P); P[[1, end], :, :] .= 0.0; P[ :,[1, end], :] .= 0.0; P[ :, :,[1, end]] .= 0.0; P = Array(P); P_ref = Array(P_ref); @require !all(CPUArray(P .== P_ref)) update_halo!(P); @test all(CPUArray(P .== P_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "3D (non-default overlap and halowidth)" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(4,2,3), halowidths=(2,1,1), quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz); P .= [z_g(iz,dz,P)*1e2 + y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P_ref = copy(P); P[[1,2, end-1,end], :, :] .= 0.0; P[ :,[1, end], :] .= 0.0; P[ :, :,[1, end]] .= 0.0; P = Array(P); P_ref = Array(P_ref); @require !all(CPUArray(P .== P_ref)) update_halo!(P); @test all(CPUArray(P .== P_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "3D (not periodic)" begin me, dims, nprocs, coords = init_global_grid(nx, ny, nz; quiet=true, init_MPI=false, device_type=device_type); P = zeros(nx, ny, nz); P .= [z_g(iz,dz,P)*1e2 + 
y_g(iy,dy,P)*1e1 + x_g(ix,dx,P) for ix=1:size(P,1), iy=1:size(P,2), iz=1:size(P,3)]; P_ref = copy(P); P[[1, end], :, :] .= 0.0; P[ :,[1, end], :] .= 0.0; P[ :, :,[1, end]] .= 0.0; P = Array(P); P_ref = Array(P_ref); @require !all(CPUArray(P .== P_ref)) update_halo!(P); @test all(CPUArray(P[2:end-1,2:end-1,2:end-1] .== P_ref[2:end-1,2:end-1,2:end-1])) if (coords[1] == 0) @test all(CPUArray(P[ 1, :, :] .== 0.0)); else @test all(CPUArray(P[ 1,2:end-1,2:end-1] .== P_ref[ 1,2:end-1,2:end-1])); end # Verifcation of corner values would be cumbersome here; it is already sufficiently covered in the periodic tests. if (coords[1] == dims[1]-1) @test all(CPUArray(P[end, :, :] .== 0.0)); else @test all(CPUArray(P[ end,2:end-1,2:end-1] .== P_ref[ end,2:end-1,2:end-1])); end if (coords[2] == 0) @test all(CPUArray(P[ :, 1, :] .== 0.0)); else @test all(CPUArray(P[2:end-1, 1,2:end-1] .== P_ref[2:end-1, 1,2:end-1])); end if (coords[2] == dims[2]-1) @test all(CPUArray(P[ :,end, :] .== 0.0)); else @test all(CPUArray(P[2:end-1, end,2:end-1] .== P_ref[2:end-1, end,2:end-1])); end if (coords[3] == 0) @test all(CPUArray(P[ :, :, 1] .== 0.0)); else @test all(CPUArray(P[2:end-1,2:end-1, 1] .== P_ref[2:end-1,2:end-1, 1])); end if (coords[3] == dims[3]-1) @test all(CPUArray(P[ :, :,end] .== 0.0)); else @test all(CPUArray(P[2:end-1,2:end-1, end] .== P_ref[2:end-1,2:end-1, end])); end finalize_global_grid(finalize_MPI=false); end; end; @testset "staggered grid (default: periodic)" begin @testset "1D" begin init_global_grid(nx, 1, 1; periodx=1, quiet=true, init_MPI=false, device_type=device_type); Vx = zeros(nx+1); Vx .= [x_g(ix,dx,Vx) for ix=1:size(Vx,1)]; Vx_ref = copy(Vx); Vx[[1, end]] .= 0.0; Vx = Array(Vx); Vx_ref = Array(Vx_ref); @require !all(CPUArray(Vx .== Vx_ref)) update_halo!(Vx); @test all(CPUArray(Vx .== Vx_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "2D" begin init_global_grid(nx, ny, 1; periodx=1, periody=1, quiet=true, init_MPI=false, device_type=device_type); Vy = zeros(nx,ny+1); Vy .= [y_g(iy,dy,Vy)*1e1 + x_g(ix,dx,Vy) for ix=1:size(Vy,1), iy=1:size(Vy,2)]; Vy_ref = copy(Vy); Vy[[1, end], :] .= 0.0; Vy[ :,[1, end]] .= 0.0; Vy = Array(Vy); Vy_ref = Array(Vy_ref); @require !all(CPUArray(Vy .== Vy_ref)) update_halo!(Vy); @test all(CPUArray(Vy .== Vy_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "3D" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, device_type=device_type); Vz = zeros(nx,ny,nz+1); Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; Vz_ref = copy(Vz); Vz[[1, end], :, :] .= 0.0; Vz[ :,[1, end], :] .= 0.0; Vz[ :, :,[1, end]] .= 0.0; Vz = Array(Vz); Vz_ref = Array(Vz_ref); @require !all(CPUArray(Vz .== Vz_ref)) update_halo!(Vz); @test all(CPUArray(Vz .== Vz_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "3D (non-default overlap and halowidth)" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(4,2,3), halowidths=(2,1,1), quiet=true, init_MPI=false, device_type=device_type); Vx = zeros(nx+1,ny,nz); Vx .= [z_g(iz,dz,Vx)*1e2 + y_g(iy,dy,Vx)*1e1 + x_g(ix,dx,Vx) for ix=1:size(Vx,1), iy=1:size(Vx,2), iz=1:size(Vx,3)]; Vx_ref = copy(Vx); Vx[[1,2, end-1,end], :, :] .= 0.0; Vx[ :,[1, end], :] .= 0.0; Vx[ :, :,[1, end]] .= 0.0; Vx = Array(Vx); Vx_ref = Array(Vx_ref); @require !all(CPUArray(Vx .== Vx_ref)) update_halo!(Vx); @test all(CPUArray(Vx .== Vx_ref)) finalize_global_grid(finalize_MPI=false); end; @testset "3D 
(not periodic)" begin me, dims, nprocs, coords = init_global_grid(nx, ny, nz; quiet=true, init_MPI=false, device_type=device_type); Vz = zeros(nx,ny,nz+1); Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; Vz_ref = copy(Vz); Vz[[1, end], :, :] .= 0.0; Vz[ :,[1, end], :] .= 0.0; Vz[ :, :,[1, end]] .= 0.0; Vz = Array(Vz); Vz_ref = Array(Vz_ref); @require !all(CPUArray(Vz .== Vz_ref)) update_halo!(Vz); @test all(CPUArray(Vz[2:end-1,2:end-1,2:end-1] .== Vz_ref[2:end-1,2:end-1,2:end-1])) if (coords[1] == 0) @test all(CPUArray(Vz[ 1, :, :] .== 0.0)); else @test all(CPUArray(Vz[ 1,2:end-1,2:end-1] .== Vz_ref[ 1,2:end-1,2:end-1])); end # Verifcation of corner values would be cumbersome here; it is already sufficiently covered in the periodic tests. if (coords[1] == dims[1]-1) @test all(CPUArray(Vz[end, :, :] .== 0.0)); else @test all(CPUArray(Vz[ end,2:end-1,2:end-1] .== Vz_ref[ end,2:end-1,2:end-1])); end if (coords[2] == 0) @test all(CPUArray(Vz[ :, 1, :] .== 0.0)); else @test all(CPUArray(Vz[2:end-1, 1,2:end-1] .== Vz_ref[2:end-1, 1,2:end-1])); end if (coords[2] == dims[2]-1) @test all(CPUArray(Vz[ :,end, :] .== 0.0)); else @test all(CPUArray(Vz[2:end-1, end,2:end-1] .== Vz_ref[2:end-1, end,2:end-1])); end if (coords[3] == 0) @test all(CPUArray(Vz[ :, :, 1] .== 0.0)); else @test all(CPUArray(Vz[2:end-1,2:end-1, 1] .== Vz_ref[2:end-1,2:end-1, 1])); end if (coords[3] == dims[3]-1) @test all(CPUArray(Vz[ :, :,end] .== 0.0)); else @test all(CPUArray(Vz[2:end-1,2:end-1, end] .== Vz_ref[2:end-1,2:end-1, end])); end finalize_global_grid(finalize_MPI=false); end; @testset "2D (no halo in one dim)" begin init_global_grid(nx, ny, 1; periodx=1, periody=1, quiet=true, init_MPI=false, device_type=device_type); A = zeros(nx-1,ny+2); A .= [y_g(iy,dy,A)*1e1 + x_g(ix,dx,A) for ix=1:size(A,1), iy=1:size(A,2)]; A_ref = copy(A); A[[1, end], :] .= 0.0; A[ :,[1, end]] .= 0.0; A = Array(A); A_ref = Array(A_ref); @require !all(CPUArray(A .== A_ref)) update_halo!(A); @test all(CPUArray(A[2:end-1,:] .== A_ref[2:end-1,:])) @test all(CPUArray(A[[1, end],:] .== 0.0)) finalize_global_grid(finalize_MPI=false); end; @testset "3D (no halo in one dim)" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, device_type=device_type); A = zeros(nx+2,ny-1,nz+1); A .= [z_g(iz,dz,A)*1e2 + y_g(iy,dy,A)*1e1 + x_g(ix,dx,A) for ix=1:size(A,1), iy=1:size(A,2), iz=1:size(A,3)]; A_ref = copy(A); A[[1, end], :, :] .= 0.0; A[ :,[1, end], :] .= 0.0; A[ :, :,[1, end]] .= 0.0; A = Array(A); A_ref = Array(A_ref); @require !all(CPUArray(A .== A_ref)) update_halo!(A); @test all(CPUArray(A[:,2:end-1,:] .== A_ref[:,2:end-1,:])) @test all(CPUArray(A[:,[1, end],:] .== 0.0)) finalize_global_grid(finalize_MPI=false); end; @testset "3D (Complex)" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, device_type=device_type); Vz = zeros(ComplexF16,nx,ny,nz+1); Vz .= [(1+im)*(z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz)) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; Vz_ref = copy(Vz); Vz[[1, end], :, :] .= 0.0; Vz[ :,[1, end], :] .= 0.0; Vz[ :, :,[1, end]] .= 0.0; Vz = Array(Vz); Vz_ref = Array(Vz_ref); @require !all(CPUArray(Vz .== Vz_ref)) update_halo!(Vz); @test all(CPUArray(Vz .== Vz_ref)) finalize_global_grid(finalize_MPI=false); end; # @testset "3D (changing datatype)" begin # init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, 
device_type=device_type); # Vz = zeros(nx,ny,nz+1); # Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; # Vz_ref = copy(Vz); # Vx = zeros(Float32,nx+1,ny,nz); # Vx .= [z_g(iz,dz,Vx)*1e2 + y_g(iy,dy,Vx)*1e1 + x_g(ix,dx,Vx) for ix=1:size(Vx,1), iy=1:size(Vx,2), iz=1:size(Vx,3)]; # Vx_ref = copy(Vx); # Vz[[1, end], :, :] .= 0.0; # Vz[ :,[1, end], :] .= 0.0; # Vz[ :, :,[1, end]] .= 0.0; # Vz = Array(Vz); # Vz_ref = Array(Vz_ref); # @require !all(Vz .== Vz_ref) # update_halo!(Vz); # @test all(Vz .== Vz_ref) # Vx[[1, end], :, :] .= 0.0; # Vx[ :,[1, end], :] .= 0.0; # Vx[ :, :,[1, end]] .= 0.0; # Vx = Array(Vx); # Vx_ref = Array(Vx_ref); # @require !all(Vx .== Vx_ref) # update_halo!(Vx); # @test all(Vx .== Vx_ref) # #TODO: added for GPU - quick fix: # Vz = zeros(nx,ny,nz+1); # Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; # Vz_ref = copy(Vz); # Vz[[1, end], :, :] .= 0.0; # Vz[ :,[1, end], :] .= 0.0; # Vz[ :, :,[1, end]] .= 0.0; # Vz = Array(Vz); # Vz_ref = Array(Vz_ref); # @require !all(Vz .== Vz_ref) # update_halo!(Vz); # @test all(Vz .== Vz_ref) # finalize_global_grid(finalize_MPI=false); # end; # @testset "3D (changing datatype) (Complex)" begin # init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, device_type=device_type); # Vz = zeros(nx,ny,nz+1); # Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; # Vz_ref = copy(Vz); # Vx = zeros(ComplexF64,nx+1,ny,nz); # Vx .= [(1+im)*(z_g(iz,dz,Vx)*1e2 + y_g(iy,dy,Vx)*1e1 + x_g(ix,dx,Vx)) for ix=1:size(Vx,1), iy=1:size(Vx,2), iz=1:size(Vx,3)]; # Vx_ref = copy(Vx); # Vz[[1, end], :, :] .= 0.0; # Vz[ :,[1, end], :] .= 0.0; # Vz[ :, :,[1, end]] .= 0.0; # Vz = Array(Vz); # Vz_ref = Array(Vz_ref); # @require !all(Vz .== Vz_ref) # update_halo!(Vz); # @test all(Vz .== Vz_ref) # Vx[[1, end], :, :] .= 0.0; # Vx[ :,[1, end], :] .= 0.0; # Vx[ :, :,[1, end]] .= 0.0; # Vx = Array(Vx); # Vx_ref = Array(Vx_ref); # @require !all(Vx .== Vx_ref) # update_halo!(Vx); # @test all(Vx .== Vx_ref) # #TODO: added for GPU - quick fix: # Vz = zeros(nx,ny,nz+1); # Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; # Vz_ref = copy(Vz); # Vz[[1, end], :, :] .= 0.0; # Vz[ :,[1, end], :] .= 0.0; # Vz[ :, :,[1, end]] .= 0.0; # Vz = Array(Vz); # Vz_ref = Array(Vz_ref); # @require !all(Vz .== Vz_ref) # update_halo!(Vz); # @test all(Vz .== Vz_ref) # finalize_global_grid(finalize_MPI=false); # end; @testset "3D (two fields simultaneously)" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, quiet=true, init_MPI=false, device_type=device_type); Vz = zeros(nx,ny,nz+1); Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; Vz_ref = copy(Vz); Vx = zeros(nx+1,ny,nz); Vx .= [z_g(iz,dz,Vx)*1e2 + y_g(iy,dy,Vx)*1e1 + x_g(ix,dx,Vx) for ix=1:size(Vx,1), iy=1:size(Vx,2), iz=1:size(Vx,3)]; Vx_ref = copy(Vx); Vz[[1, end], :, :] .= 0.0; Vz[ :,[1, end], :] .= 0.0; Vz[ :, :,[1, end]] .= 0.0; Vx[[1, end], :, :] .= 0.0; Vx[ :,[1, end], :] .= 0.0; Vx[ :, :,[1, end]] .= 0.0; Vz = Array(Vz); Vz_ref = Array(Vz_ref); Vx = Array(Vx); Vx_ref = Array(Vx_ref); @require !all(CPUArray(Vz .== Vz_ref)) @require !all(CPUArray(Vx .== Vx_ref)) update_halo!(Vz, Vx); @test all(CPUArray(Vz .== Vz_ref)) @test all(CPUArray(Vx .== Vx_ref)) 
finalize_global_grid(finalize_MPI=false); end; @testset "3D (two fields simultaneously, non-default overlap and halowidth)" begin init_global_grid(nx, ny, nz; periodx=1, periody=1, periodz=1, overlaps=(4,2,3), halowidths=(2,1,1), quiet=true, init_MPI=false, device_type=device_type); Vz = zeros(nx,ny,nz+1); Vz .= [z_g(iz,dz,Vz)*1e2 + y_g(iy,dy,Vz)*1e1 + x_g(ix,dx,Vz) for ix=1:size(Vz,1), iy=1:size(Vz,2), iz=1:size(Vz,3)]; Vz_ref = copy(Vz); Vx = zeros(nx+1,ny,nz); Vx .= [z_g(iz,dz,Vx)*1e2 + y_g(iy,dy,Vx)*1e1 + x_g(ix,dx,Vx) for ix=1:size(Vx,1), iy=1:size(Vx,2), iz=1:size(Vx,3)]; Vx_ref = copy(Vx); Vz[[1,2, end-1,end], :, :] .= 0.0; Vz[ :,[1, end], :] .= 0.0; Vz[ :, :,[1, end]] .= 0.0; Vx[[1,2, end-1,end], :, :] .= 0.0; Vx[ :,[1, end], :] .= 0.0; Vx[ :, :,[1, end]] .= 0.0; Vz = Array(Vz); Vz_ref = Array(Vz_ref); Vx = Array(Vx); Vx_ref = Array(Vx_ref); @require !all(CPUArray(Vz .== Vz_ref)) @require !all(CPUArray(Vx .== Vx_ref)) update_halo!(Vz, Vx); @test all(CPUArray(Vz .== Vz_ref)) @test all(CPUArray(Vx .== Vx_ref)) finalize_global_grid(finalize_MPI=false); end; end; end; end; ## Test tear down MPI.Finalize()
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "BSD-3-Clause" ]
0.15.2
aeac55c216301a745ea67b00b6ebb6537f5e036c
docs
16495
<h1> <img src="docs/src/assets/logo.png" alt="ImplicitGlobalGrid.jl" width="50"> ImplicitGlobalGrid.jl </h1>

[![CI](https://github.com/eth-cscs/ImplicitGlobalGrid.jl/workflows/CI/badge.svg?branch=master)](https://github.com/eth-cscs/ImplicitGlobalGrid.jl/actions/workflows/CI.yml?query=branch%3Amain)
[![Coverage](https://codecov.io/gh/omlins/ImplicitGlobalGrid.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/omlins/ImplicitGlobalGrid.jl)

ImplicitGlobalGrid is an outcome of a collaboration of the Swiss National Supercomputing Centre, ETH Zurich (Dr. Samuel Omlin) with Stanford University (Dr. Ludovic Räss) and the Swiss Geocomputing Centre (Prof. Yuri Podladchikov). It renders the distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid almost trivial and enables close to ideal weak scaling of real-world applications on thousands of GPUs \[[1][JuliaCon19], [2][PASC19], [3][JuliaCon20a]\]:

![Weak scaling Piz Daint](docs/src/assets/images/fig_parEff_HM3D_Julia_CUDA_all_Daint_extrapol.png)

ImplicitGlobalGrid relies on the Julia MPI wrapper ([MPI.jl]) to perform halo updates close to the hardware limit and leverages CUDA-aware or ROCm-aware MPI for GPU applications. The communication can straightforwardly be hidden behind computation \[[1][JuliaCon19], [3][JuliaCon20a]\] (how this can be done automatically when using ParallelStencil.jl is shown in \[[3][JuliaCon20a]\]; a general approach particularly suited for CUDA C applications is explained in \[[4][GTC19]\]).

A particularity of ImplicitGlobalGrid is the automatic *implicit creation of the global computational grid* based on the number of processes the application is run with (and based on the process topology, which can be explicitly chosen by the user or automatically defined). As a consequence, the user only needs to write the code to solve their problem on one GPU/CPU (*local grid*); then, **as few as three functions can be enough to transform a single GPU/CPU application into a massively scaling Multi-GPU/CPU application**. See the [example](#multi-gpu-with-three-functions) below. 1-D, 2-D and 3-D grids are supported.
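To give a first idea of the pattern before the complete example below, here is a minimal sketch (it is not one of the package's documented examples): the grid size, the number of iterations and the trivial averaging stencil are arbitrary placeholders for the user's actual local computation; only the three ImplicitGlobalGrid calls are essential.

```julia
using ImplicitGlobalGrid

nx, ny, nz = 64, 64, 64                    # Local grid size on each process (placeholder values)
me, dims   = init_global_grid(nx, ny, nz)  # 1. Create the implicit global grid
A = rand(nx, ny, nz)                       # Local array, including the halo
for it = 1:10
    # Placeholder stencil update of the interior points (stands for the user's actual computation)
    A[2:end-1,2:end-1,2:end-1] .= (A[1:end-2,2:end-1,2:end-1] .+ A[3:end,2:end-1,2:end-1] .+
                                   A[2:end-1,1:end-2,2:end-1] .+ A[2:end-1,3:end,2:end-1] .+
                                   A[2:end-1,2:end-1,1:end-2] .+ A[2:end-1,2:end-1,3:end]) ./ 6.0
    update_halo!(A)                        # 2. Exchange the halo with the neighboring processes
end
finalize_global_grid()                     # 3. Finalize the global grid (and MPI)
```

The complete, physically meaningful version of this pattern is the 3-D heat diffusion solver shown in the [50-lines Multi-GPU example](#50-lines-multi-gpu-example) below.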
Here is a sketch of the global grid that results from running a 2-D solver with 4 processes (P1-P4) (a 2x2 process topology is created by default in this case):

![Implicit global grid](docs/src/assets/images/implicit_global_grid.png)

## Contents
* [Multi-GPU with three functions](#multi-gpu-with-three-functions)
* [50-lines Multi-GPU example](#50-lines-multi-gpu-example)
* [Straightforward in-situ visualization / monitoring](#straightforward-in-situ-visualization--monitoring)
* [Seamless interoperability with MPI.jl](#seamless-interoperability-with-mpijl)
* [CUDA-aware/ROCm-aware MPI support](#cuda-awarerocm-aware-mpi-support)
* [Module documentation callable from the Julia REPL / IJulia](#module-documentation-callable-from-the-julia-repl--ijulia)
* [Dependencies](#dependencies)
* [Installation](#installation)
* [References](#references)

## Multi-GPU with three functions
Only three functions are required to perform halo updates close to the hardware limit:
- `init_global_grid`
- `update_halo!`
- `finalize_global_grid`

Three additional functions are provided to query Cartesian coordinates with respect to the global computational grid, if required:
- `x_g`
- `y_g`
- `z_g`

Moreover, the following three functions allow querying the size of the global grid:
- `nx_g`
- `ny_g`
- `nz_g`

The following Multi-GPU 3-D heat diffusion solver illustrates how these functions enable the creation of massively parallel applications.

## 50-lines Multi-GPU example
This simple Multi-GPU 3-D heat diffusion solver uses ImplicitGlobalGrid. It relies fully on the broadcasting capabilities of [CUDA.jl]'s `CuArray` type to perform the stencil computations with maximal simplicity ([CUDA.jl] also enables writing explicit GPU kernels, which can lead to significantly better performance for these computations).

```julia
using CUDA          # Import CUDA before ImplicitGlobalGrid to activate its CUDA device support
using ImplicitGlobalGrid

@views d_xa(A) = A[2:end  , :     , :     ] .- A[1:end-1, :     , :     ];
@views d_xi(A) = A[2:end  ,2:end-1,2:end-1] .- A[1:end-1,2:end-1,2:end-1];
@views d_ya(A) = A[ :     ,2:end  , :     ] .- A[ :     ,1:end-1, :     ];
@views d_yi(A) = A[2:end-1,2:end  ,2:end-1] .- A[2:end-1,1:end-1,2:end-1];
@views d_za(A) = A[ :     , :     ,2:end  ] .- A[ :     , :     ,1:end-1];
@views d_zi(A) = A[2:end-1,2:end-1,2:end  ] .- A[2:end-1,2:end-1,1:end-1];
@views  inn(A) = A[2:end-1,2:end-1,2:end-1]

@views function diffusion3D()
    # Physics
    lam        = 1.0;                                      # Thermal conductivity
    cp_min     = 1.0;                                      # Minimal heat capacity
    lx, ly, lz = 10.0, 10.0, 10.0;                         # Length of domain in dimensions x, y and z

    # Numerics
    nx, ny, nz = 256, 256, 256;                            # Number of gridpoints in dimensions x, y and z
    nt         = 100000;                                   # Number of time steps
    init_global_grid(nx, ny, nz);                          # Initialize the implicit global grid
    dx         = lx/(nx_g()-1);                            # Space step in dimension x
    dy         = ly/(ny_g()-1);                            # ...        in dimension y
    dz         = lz/(nz_g()-1);                            # ...        in dimension z

    # Array initializations
    T     = CUDA.zeros(Float64, nx,   ny,   nz  );
    Cp    = CUDA.zeros(Float64, nx,   ny,   nz  );
    dTedt = CUDA.zeros(Float64, nx-2, ny-2, nz-2);
    qx    = CUDA.zeros(Float64, nx-1, ny-2, nz-2);
    qy    = CUDA.zeros(Float64, nx-2, ny-1, nz-2);
    qz    = CUDA.zeros(Float64, nx-2, ny-2, nz-1);

    # Initial conditions (heat capacity and temperature with two Gaussian anomalies each)
    Cp .= cp_min .+ CuArray([5*exp(-((x_g(ix,dx,Cp)-lx/1.5))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2) +
                             5*exp(-((x_g(ix,dx,Cp)-lx/3.0))^2-((y_g(iy,dy,Cp)-ly/2))^2-((z_g(iz,dz,Cp)-lz/1.5))^2)
                             for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)])
    T  .= CuArray([100*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/3.0)/2)^2) +
                    50*exp(-((x_g(ix,dx,T)-lx/2)/2)^2-((y_g(iy,dy,T)-ly/2)/2)^2-((z_g(iz,dz,T)-lz/1.5)/2)^2)
                    for ix=1:size(T,1), iy=1:size(T,2), iz=1:size(T,3)])

    # Time loop
    dt = min(dx*dx,dy*dy,dz*dz)*cp_min/lam/8.1;                                  # Time step for the 3-D heat diffusion
    for it = 1:nt
        qx    .= -lam.*d_xi(T)./dx;                                              # Fourier's law of heat conduction: q_x = -λ δT/δx
        qy    .= -lam.*d_yi(T)./dy;                                              # ...                               q_y = -λ δT/δy
        qz    .= -lam.*d_zi(T)./dz;                                              # ...                               q_z = -λ δT/δz
        dTedt .= 1.0./inn(Cp).*(-d_xa(qx)./dx .- d_ya(qy)./dy .- d_za(qz)./dz);  # Conservation of energy: δT/δt = 1/cₚ (-δq_x/δx - δq_y/δy - δq_z/δz)
        T[2:end-1,2:end-1,2:end-1] .= inn(T) .+ dt.*dTedt;                       # Update of temperature: T_new = T_old + δT/δt
        update_halo!(T);                                                         # Update the halo of T
    end

    finalize_global_grid();                                                      # Finalize the implicit global grid
end

diffusion3D()
```

The corresponding file can be found [here](docs/examples/diffusion3D_multigpu_CuArrays_novis.jl). A basic CPU-only example is available [here](docs/examples/diffusion3D_multicpu_novis.jl) (no usage of multi-threading).

## Straightforward in-situ visualization / monitoring
ImplicitGlobalGrid provides a function to gather an array from each process into one large array on a single process, assembled according to the global grid:
- `gather!`

This enables straightforward in-situ visualization or monitoring of Multi-GPU/CPU applications using, e.g., the [Julia Plots package], as shown in the following (the GR backend is used as it is particularly fast according to the [Julia Plots documentation]). It is enough to add a couple of lines to the previous example (omitted unmodified lines are represented with `#(...)`):

```julia
using CUDA          # Import CUDA before ImplicitGlobalGrid to activate its CUDA device support
using ImplicitGlobalGrid, Plots
#(...)

@views function diffusion3D()
    # Physics
    #(...)

    # Numerics
    #(...)
    me, dims = init_global_grid(nx, ny, nz);               # Initialize the implicit global grid
    #(...)

    # Array initializations
    #(...)

    # Initial conditions (heat capacity and temperature with two Gaussian anomalies each)
    #(...)

    # Preparation of visualisation
    gr()
    ENV["GKSwstype"]="nul"
    anim = Animation();
    nx_v = (nx-2)*dims[1];
    ny_v = (ny-2)*dims[2];
    nz_v = (nz-2)*dims[3];
    T_v  = zeros(nx_v, ny_v, nz_v);
    T_nohalo = zeros(nx-2, ny-2, nz-2);

    # Time loop
    #(...)
    for it = 1:nt
        if mod(it, 1000) == 1                                       # Visualize only every 1000th time step
            T_nohalo .= Array(T[2:end-1,2:end-1,2:end-1]);          # Copy data to CPU removing the halo
            gather!(T_nohalo, T_v)                                  # Gather data on process 0 (could be interpolated/sampled first)
            if (me==0) heatmap(transpose(T_v[:,ny_v÷2,:]), aspect_ratio=1); frame(anim); end  # Visualize it on process 0
        end
        #(...)
    end

    # Postprocessing
    if (me==0) gif(anim, "diffusion3D.gif", fps = 15) end           # Create a gif movie on process 0
    if (me==0) mp4(anim, "diffusion3D.mp4", fps = 15) end           # Create a mp4 movie on process 0
    finalize_global_grid();                                         # Finalize the implicit global grid
end

diffusion3D()
```

Here is the resulting movie when running the application on 8 GPUs, solving 3-D heat diffusion with heterogeneous heat capacity (two Gaussian anomalies) on a global computational grid of 510x510x510 grid points. It shows the x-z-dimension plane in the middle of the dimension y:

![Implicit global grid](docs/src/assets/videos/diffusion3D_8gpus.gif)

The simulation producing this movie - *including the in-situ visualization* - took 29 minutes on 8 NVIDIA® Tesla® P100 GPUs on Piz Daint (an optimized solution using [CUDA.jl]'s native kernel programming capabilities can be more than 10 times faster). The complete example can be found [here](docs/examples/diffusion3D_multigpu_CuArrays.jl). A corresponding basic CPU-only example is available [here](docs/examples/diffusion3D_multicpu.jl) (no usage of multi-threading); a movie of a simulation with 254x254x254 grid points, which it produced within 34 minutes using 8 Intel® Xeon® E5-2690 v3 processors (8 processes, no multi-threading), is found [here](docs/src/assets/videos/diffusion3D_8cpus.gif).

## Seamless interoperability with MPI.jl
ImplicitGlobalGrid is seamlessly interoperable with [MPI.jl]. The Cartesian MPI communicator it uses is created by default when calling `init_global_grid` and can then be obtained as follows (variable `comm_cart`):
```julia
me, dims, nprocs, coords, comm_cart = init_global_grid(nx, ny, nz);
```
Moreover, the automatic initialization and finalization of MPI can be deactivated in order to replace them with direct calls to [MPI.jl]:
```julia
init_global_grid(nx, ny, nz; init_MPI=false);
```
```julia
finalize_global_grid(;finalize_MPI=false)
```
In addition, `init_global_grid` makes every argument it passes to an [MPI.jl] function customizable via its keyword arguments.

## CUDA-aware/ROCm-aware MPI support
If the system supports CUDA-aware/ROCm-aware MPI, it may be activated for ImplicitGlobalGrid by setting an environment variable as specified in the module documentation callable from the [Julia REPL] or in [IJulia] (see next section).

## Module documentation callable from the Julia REPL / IJulia
The module documentation can be called from the [Julia REPL] or in [IJulia]:
```julia-repl
julia> using ImplicitGlobalGrid
julia>?
help?> ImplicitGlobalGrid
search: ImplicitGlobalGrid

  Module ImplicitGlobalGrid

  Renders the distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid almost trivial and
  enables close to ideal weak scaling of real-world applications on thousands of GPUs.

  General overview and examples
  ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡

  https://github.com/eth-cscs/ImplicitGlobalGrid.jl

  Functions
  ≡≡≡≡≡≡≡≡≡≡≡

    •  init_global_grid
    •  finalize_global_grid
    •  update_halo!
    •  gather!
    •  select_device
    •  nx_g
    •  ny_g
    •  nz_g
    •  x_g
    •  y_g
    •  z_g
    •  tic
    •  toc

  To see a description of a function type ?<functionname>.

  │ Activation of device support
  │
  │ The support for a device type (CUDA or AMDGPU) is activated by importing the corresponding module (CUDA or AMDGPU) before
  │ importing ImplicitGlobalGrid (the corresponding extension will be loaded).
  │
  │ Performance note
  │
  │ If the system supports CUDA-aware MPI (for Nvidia GPUs) or ROCm-aware MPI (for AMD GPUs), it may be activated for
  │ ImplicitGlobalGrid by setting one of the following environment variables (at latest before the call to init_global_grid):
  │
  │ shell> export IGG_CUDAAWARE_MPI=1
  │
  │ shell> export IGG_ROCMAWARE_MPI=1

julia>
```

## Dependencies
ImplicitGlobalGrid relies on the Julia MPI wrapper ([MPI.jl]), the Julia CUDA package ([CUDA.jl] \[[5][Julia CUDA paper 1], [6][Julia CUDA paper 2]\]) and the Julia AMDGPU package ([AMDGPU.jl]).

## Installation
ImplicitGlobalGrid may be installed directly with the [Julia package manager](https://docs.julialang.org/en/v1/stdlib/Pkg/index.html) from the REPL:
```julia-repl
julia>]
  pkg> add ImplicitGlobalGrid
  pkg> test ImplicitGlobalGrid
```

## References
\[1\] [Räss, L., Omlin, S., & Podladchikov, Y. Y. (2019). Porting a Massively Parallel Multi-GPU Application to Julia: a 3-D Nonlinear Multi-Physics Flow Solver. JuliaCon Conference, Baltimore, USA.][JuliaCon19]

\[2\] [Räss, L., Omlin, S., & Podladchikov, Y. Y. (2019). A Nonlinear Multi-Physics 3-D Solver: From CUDA C + MPI to Julia. PASC19 Conference, Zurich, Switzerland.][PASC19]

\[3\] [Omlin, S., Räss, L., Kwasniewski, G., Malvoisin, B., & Podladchikov, Y. Y. (2020). Solving Nonlinear Multi-Physics on GPU Supercomputers with Julia. JuliaCon Conference, virtual.][JuliaCon20a]

\[4\] [Räss, L., Omlin, S., & Podladchikov, Y. Y. (2019). Resolving Spontaneous Nonlinear Multi-Physics Flow Localisation in 3-D: Tackling Hardware Limit. GPU Technology Conference 2019, San Jose, Silicon Valley, CA, USA.][GTC19]

\[5\] [Besard, T., Foket, C., & De Sutter, B. (2018). Effective Extensible Programming: Unleashing Julia on GPUs. IEEE Transactions on Parallel and Distributed Systems, 30(4), 827-841. doi: 10.1109/TPDS.2018.2872064][Julia CUDA paper 1]

\[6\] [Besard, T., Churavy, V., Edelman, A., & De Sutter B. (2019). Rapid software prototyping for heterogeneous and distributed platforms. Advances in Engineering Software, 132, 29-46. doi: 10.1016/j.advengsoft.2019.02.002][Julia CUDA paper 2]

[JuliaCon20a]: https://www.youtube.com/watch?v=vPsfZUqI4_0
[JuliaCon19]: https://pretalx.com/juliacon2019/talk/LGHLC3/
[PASC19]: https://pasc19.pasc-conference.org/program/schedule/presentation/?id=msa218&sess=sess144
[GTC19]: https://on-demand.gputechconf.com/gtc/2019/video/_/S9368/
[MPI.jl]: https://github.com/JuliaParallel/MPI.jl
[CUDA.jl]: https://github.com/JuliaGPU/CUDA.jl
[AMDGPU.jl]: https://github.com/JuliaGPU/AMDGPU.jl
[Julia Plots package]: https://github.com/JuliaPlots/Plots.jl
[Julia Plots documentation]: http://docs.juliaplots.org/latest/backends/
[Julia CUDA paper 1]: https://doi.org/10.1109/TPDS.2018.2872064
[Julia CUDA paper 2]: https://doi.org/10.1016/j.advengsoft.2019.02.002
[Julia REPL]: https://docs.julialang.org/en/v1/stdlib/REPL/
[IJulia]: https://github.com/JuliaLang/IJulia.jl
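As a complement to the interoperability section above, the following is a minimal sketch (not part of the original README) of combining ImplicitGlobalGrid with a direct [MPI.jl] call over the Cartesian communicator `comm_cart`, here to reduce a locally computed scalar. The grid size and the use of a plain CPU array are illustrative assumptions.

```julia
# Hedged sketch: ImplicitGlobalGrid plus a plain MPI.jl reduction.
using MPI, ImplicitGlobalGrid

nx = ny = nz = 16
me, dims, nprocs, coords, comm_cart = init_global_grid(nx, ny, nz)
T = rand(nx, ny, nz)                                           # local field (CPU array for simplicity)
t_max_local  = maximum(T[2:end-1, 2:end-1, 2:end-1])           # exclude the local halo
t_max_global = MPI.Allreduce(t_max_local, MPI.MAX, comm_cart)  # direct MPI.jl call on the Cartesian communicator
(me == 0) && println("global maximum: ", t_max_global)
finalize_global_grid()
```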
ImplicitGlobalGrid
https://github.com/eth-cscs/ImplicitGlobalGrid.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
962
module Coequalizer
# using Revise
using Test
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration

include(joinpath(@__DIR__, "equalizer.jl"))

S = dual(Equalizer.S, :Coequalizer, [:E=>:C, :e=>:c])

function runtests()
  I = @acset S.cset begin A=2;B=2 end
  es = init_premodel(S,I, [:A,:B])
  chase_db(S,es)
  expected =[
    # f,g both const and point to same element
    @acset(S.cset,begin A=2;B=2;C=2;f=1;g=1;c=[1,2] end),
    # f,g both const and point to different elements
    @acset(S.cset,begin A=2;B=2;C=1;f=1;g=2;c=1 end),
    # g const, f is not
    @acset(S.cset,begin A=2;B=2;C=1;f=[1,2];g=1;c=1 end),
    # f const, g is not
    @acset(S.cset,begin A=2;B=2;C=1;g=[1,2];f=1;c=1 end),
    # f and g are id
    @acset(S.cset,begin A=2;B=2;C=2;f=[1,2];g=[1,2];c=[1,2] end),
    # f and g are swapped
    @acset(S.cset,begin A=2;B=2;C=1;f=[1,2];g=[2,1];c=1 end),
  ]
  @test test_models(es, S, expected)
  return true
end

end # module
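# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): enumerate
# the coequalizer models and check the defining property c∘f == c∘g directly.
# `check_coequalizer_property` is a new helper introduced only for
# illustration; it assumes the module above has been evaluated in this file.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_coequalizer_property(S=Coequalizer.S)
  es = init_premodel(S, @acset(S.cset, begin A=2;B=2 end), [:A,:B])
  chase_db(S, es)
  for i in es.models
    X = get_model(es, S, i)               # extract a completed model as an ACSet
    @assert X[:c][X[:f]] == X[:c][X[:g]]  # c coequalizes f and g
  end
  return true
end
# Call check_coequalizer_property() manually; it is not wired into runtests().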
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
708
module Const
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration
using Test

"""
CONSTANTS

Models are two constants from a set. A constant is an arrow from 1, the set
with one element.
"""
constschema = @acset LabeledGraph begin
  V = 2; E = 2; vlabel = [:I, :A]; elabel = [:f, :g]; src = 1; tgt = 2
end

S = Sketch(:const, constschema, cones=[Cone(:I)])

function runtests()
  I = @acset S.cset begin A=3 end
  es = init_premodel(S,I, [:A])
  chase_db(S,es)
  expected = [
    # f and g are the same
    @acset(S.cset,begin A=3;I=1;f=1;g=1 end),
    # f and g are different
    @acset(S.cset,begin A=3;I=1;f=1;g=2 end),
  ]
  @test test_models(es, S, expected)
  return true
end

end # module
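# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): the empty
# cone on :I makes I a terminal object, so every model has exactly one element
# of I and f, g simply pick out elements of A. `check_const_terminal` is a new
# helper introduced only for illustration.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_const_terminal(S=Const.S)
  es = init_premodel(S, @acset(S.cset, begin A=3 end), [:A])
  chase_db(S, es)
  for i in es.models
    X = get_model(es, S, i)
    @assert nparts(X, :I) == 1                  # the empty cone forces I ≅ 1
    @assert 1 <= only(X[:f]) <= nparts(X, :A)   # f is a choice of an element of A
  end
  return true
end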
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
872
module Coproduct
# using Revise
using Test
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration

"""
A sketch for the coproduct A+A: a cocone over two copies of A, with
injections iA and iB.
"""
schema = @acset LabeledGraph begin
  V=2; E=2; vlabel=[:A,:A_A]; elabel=[:iA,:iB]; src=1; tgt=2
end

"""A_A is the coproduct A+A"""
a_a = Cone(@acset(LabeledGraph, begin V=2; vlabel=[:A,:A] end), :A_A, [1=>:iA,2=>:iB])

S = Sketch(:Coprod, schema, cocones=[a_a,])

function runtests()
  I = @acset S.cset begin A=3 end
  es = init_premodel(S,I, [:A])
  expected = @acset S.cset begin A=3;A_A=6; iA=[1,2,3];iB=[4,5,6] end
  chase_db(S,es)
  @test test_models(es, S, [expected])

  I = @acset S.cset begin A=3; A_A=6 end
  es = init_premodel(S,I, [:A,:A_A])
  chase_db(S,es)
  @test test_models(es, S, [expected])

  return true
end

end # module
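# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): check the
# coproduct law |A+A| = 2|A| and that the injections iA, iB jointly cover A_A.
# `check_coproduct` is a new helper introduced only for illustration; it
# assumes the enumeration yields a single model up to isomorphism, as the
# test above expects.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_coproduct(S=Coproduct.S)
  es = init_premodel(S, @acset(S.cset, begin A=3 end), [:A])
  chase_db(S, es)
  X = get_model(es, S, only(es.models))
  @assert nparts(X, :A_A) == 2 * nparts(X, :A)
  @assert sort(vcat(X[:iA], X[:iB])) == collect(parts(X, :A_A))  # iA, iB jointly cover A+A
  return true
end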
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
1206
module Equalizer
# using Revise
using Test
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration

eqschema = @acset LabeledGraph begin
  V = 3; E = 3
  vlabel = [:A,:B,:E]; elabel = [:f,:g, :e]
  src = [1,1,3]; tgt = [2,2,1]
end

eqconed = @acset LabeledGraph begin
  V=3; E=2; vlabel=[:A,:A,:B]; elabel=[:f,:g]; src=[1,2]; tgt=[3,3]
end

S = Sketch(:Equalizer, eqschema, cones=[Cone(eqconed, :E, [1=>:e,2=>:e])]);

function runtests()
  I = @acset S.cset begin A=2;B=2 end
  es = init_premodel(S,I, [:A,:B])
  chase_db(S,es)
  ms = [get_model(es,S,i) for i in es.models]
  expected =[
    # f,g both const and point to same element
    @acset(S.cset,begin A=2;B=2;E=2;f=1;g=1;e=[1,2] end),
    # f,g both const and point to different elements
    @acset(S.cset,begin A=2;B=2;f=1;g=2 end),
    # g const, f is not
    @acset(S.cset,begin A=2;B=2;E=1;f=[1,2];g=1;e=1 end),
    # f const, g is not
    @acset(S.cset,begin A=2;B=2;E=1;g=[1,2];f=1;e=1 end),
    # f and g are id
    @acset(S.cset,begin A=2;B=2;E=2;f=[1,2];g=[1,2];e=[1,2] end),
    # f and g are swapped
    @acset(S.cset,begin A=2;B=2;f=[1,2];g=[2,1] end),
  ]
  @test test_models(es, S, expected)
  return true
end

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
6015
module GraphOverlap # using Revise using Test using Catlab.CategoricalAlgebra, Catlab.Graphics, Catlab.Present, Catlab.Graphs using CombinatorialEnumeration using CombinatorialEnumeration.Models: is_surjective using DataStructures using CSetAutomorphisms import CombinatorialEnumeration const LG = CombinatorialEnumeration.LabeledGraph """ Using the surjection encoding, this is a sketch for a two pairs of maps that are each *jointly surjective*. These correspond to the vertex and edge maps of graph homomorphisms from G₁ and G₂ into G₃. V₁ π₁ ↙ ↘ fᵥ PB ⇉ V₁+V₂ ↠ V₃ π₂ ↖ ↗ gᵥ V₂ We furthermore need to specify the homomorphism condition (which relates fᵥ to fₑ, e.g.) and the graph data (which relates V₁ to E₁, e.g.) """ schema = @acset LG begin V=10; E=20; vlabel=[:V₁,:V₂,:V₃,:V₁_V₂,:PBᵥ,:E₁,:E₂,:E₃,:E₁_E₂,:PBₑ]; elabel=[:fᵥ,:gᵥ,:iᵥ₁,:iᵥ₂,:pᵥ₁,:pᵥ₂,:fᵥ_gᵥ, :fₑ,:gₑ,:iₑ₁,:iₑ₂,:pₑ₁,:pₑ₂,:fₑ_gₑ, :s₁,:t₁,:s₂,:t₂,:s₃,:t₃]; src=[1,2,1,2,5,5,4, 6,7,6,7,10,10,9, 6,6,7,7,8,8]; tgt=[3,3,4,4,4,4,3, 8,8,9,9,9, 9, 8, 1,1,2,2,3,3] end @present SchRes(FreeSchema) begin (V₁,V₂,V₃,E₁,E₂,E₃)::Ob fᵥ::Hom(V₁,V₃); gᵥ::Hom(V₂,V₃) fₑ::Hom(E₁,E₃); gₑ::Hom(E₂,E₃) (s₁,t₁)::Hom(E₁,V₁) (s₂,t₂)::Hom(E₂,V₂) (s₃,t₃)::Hom(E₃,V₃) end @acset_type R(SchRes) # View the schema here # to_graphviz(schema;node_labels=:vlabel,edge_labels=:elabel) """PB is a pullback: all pairs of A+B that agree on their value in c""" cs = map([:V=>:ᵥ,:E=>:ₑ]) do (x,y) vlabel = Symbol.([fill("$(x)₁_$(x)₂",2)...,"$(x)₃"]) elabel = Symbol.(fill("f$(y)_g$y" ,2)) lgs = [1=>Symbol("p$(y)₁"),2=>Symbol("p$(y)₂")] g = @acset(LG, begin V=3;E=2; vlabel=vlabel; elabel=elabel; src=[1,2]; tgt=3 end,) Cone(g, Symbol("PB$y"), lgs) end """(C,c) is the coequalizer of PB's legs""" ccs = map([:V=>:ᵥ,:E=>:ₑ]) do (x,y) vlabel = Symbol.(["PB$y",fill("$(x)₁_$(x)₂", 2)...]) elabel = Symbol.(["p$(y)₁", "p$(y)₂"]) lgs = [i=>Symbol("f$(y)_g$y") for i in [2,3]] g = @acset(LG, begin V=3;E=2;vlabel=vlabel; elabel=elabel; src=1; tgt=2 end) Cone(g, Symbol("$(x)₃"), lgs) end """A_B is the coproduct A+B""" a_bs = map([:V=>:ᵥ,:E=>:ₑ]) do (x,y) vlabel = Symbol.(["$(x)₁", "$(x)₂"]) ap = Symbol("$(x)₁_$(x)₂") lgs = [1=>Symbol("i$(y)₁"),2=>Symbol("i$(y)₂")] Cone(@acset(LG, begin V=2;vlabel=vlabel end), ap, lgs) end """Make a morphism injective""" mk_inj(s,t,f) = Cone(@acset(LG, begin V=3;E=2;vlabel=[s,s,t]; elabel=[f,f];src=[1,2]; tgt=3 end,), s, [1=>add_id(s),2=>add_id(s)]) injs = [mk_inj(x...) for x in [(:V₁,:V₃,:fᵥ),(:V₂,:V₃,:gᵥ),(:E₁,:E₃,:fₑ),(:E₂,:E₃,:gₑ)]] # Equations for the consistency of maps out of the coproduct objects ve_eqs = vcat(map([:ᵥ,:ₑ]) do y c = "f$(y)_g$y" (m->(n->Symbol.(n)).(m)).([[["f$y"],["i$(y)₁",c]],[["g$y"],["i$(y)₂",c]]]) end...) # Equations for the homomorphism constraints hom_eqs = vcat(map([:f => Symbol("₁"), :g => Symbol("₂")]) do (f,i) map([:s,:t]) do st (m->Symbol.(m)).([["$(f)ₑ","$(st)₃"],["$st$i","$(f)ᵥ"],]) end end...) 
eqs = vcat(ve_eqs, hom_eqs) S = Sketch(:Overlap, schema, cones=[cs...,injs...], cocones=[ccs...,a_bs...], eqs=eqs) # Example of 3 path equations starting from E₁ to_graphviz(S.eqs[:E₁]; node_labels=:vlabel, edge_labels=:elabel) function init_graphs(g1::Graph, g2::Graph,vg3=0,eg3=0) @acset S.cset begin V₁=nv(g1); V₂=nv(g2);E₁=ne(g1);E₂=ne(g2);V₃=vg3;E₃=eg3 s₁=g1[:src];t₁=g1[:tgt];s₂=g2[:src];t₂=g2[:tgt] end end function parse_graph(X::StructACSet, i::Symbol) @acset Graph begin V=nparts(X,Symbol("V$i")); E=nparts(X,Symbol("E$i")) src=X[Symbol("s$i")];tgt=X[Symbol("t$i")]; end end function parse_map(X::StructACSet, i::Symbol) fv, fe = [Symbol("$(string(i)=="₁" ? "f" : "g" )$p") for p in [:ᵥ,:ₑ]] m = ACSetTransformation(parse_graph(X,i), parse_graph(X,Symbol("₃")); V=X[fv], E=X[fe]) is_natural(m) || error("unnatural $(dom(m))\n$(codom(m))\n$(components(m))") m end """Parse maps and confirm it is jointly surjective""" function parse_graphoverlap(X::StructACSet) f, g = [parse_map(X,Symbol(i)) for i in ["₁","₂"]] for P in [:V,:E] for v in parts(codom(f), P) v ∈ collect(f[P]) ∪ collect(g[P]) || error("$P#$v not in image(f+g)") end end return (codom(f),f,g) end parse_result(X::StructACSet,Y::StructACSet{S}) where S = begin copy_parts!(Y,X,ob(S)); return Y end parse_result(X::StructACSet) = parse_result(X,R()) function runtests() pg = path_graph(Graph,2) I = init_graphs(pg,pg) es = init_premodel(S,I, [:V₁, :V₂, :E₁,:E₂]) chase_db(S,es); expected = [ # arrows pointing opposite between same vertices @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=2;E₃=2;s₃=[1,2];t₃=[2,1]; fᵥ=[1,2];gᵥ=[2,1];fₑ=1;gₑ=2 end), # arrows pointing in parallel between same vertices @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=2;E₃=2;s₃=1;t₃=2; fᵥ=[1,2];gᵥ=[1,2];fₑ=1;gₑ=2 end), # no overlap @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=4;E₃=2;s₃=[1,3];t₃=[2,4]; fᵥ=[1,2];gᵥ=[3,4];fₑ=1;gₑ=2 end), # total overlap @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=2;E₃=1;s₃=1;t₃=2; fᵥ=[1,2];gᵥ=[1,2];fₑ=1;gₑ=1 end), # overlap a1 b1 @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=3;E₃=2;s₃=1;t₃=[2,3]; fᵥ=[1,2];gᵥ=[1,3];fₑ=1;gₑ=2 end), # overlap a2 b1 @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=3;E₃=2;s₃=[1,2];t₃=[2,3]; fᵥ=[1,2];gᵥ=[2,3];fₑ=1;gₑ=2 end), # overlap a1 b2 @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=3;E₃=2;s₃=[1,2];t₃=[2,3]; fᵥ=[2,3];gᵥ=[1,2];fₑ=2;gₑ=1 end), # overlap a2 b2 @acset(R, begin V₁=2;V₂=2;E₁=1;E₂=1;s₁=1;t₁=2;s₂=1;t₂=2 V₃=3;E₃=2;s₃=[1,3];t₃=2; fᵥ=[1,2];gᵥ=[3,2];fₑ=1;gₑ=2 end), ]; @test test_models(es, S, expected; f=parse_result) return true end end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
966
module Inj
# using Revise
using Test
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration

"""
Encoding of an injection as a limit cone with id legs
Barr and Wells CTCS 10.4.6
"""
schema = @acset LabeledGraph begin
  V=2; E=1; vlabel=[:A,:B]; elabel=[:f]; src=[1]; tgt=[2]
end

"""
       id       f
     ----->  A  ↘
Apex A             B
     ----->  A  ↗
       id       f
"""
c = Cone(@acset(LabeledGraph, begin
      V=3;E=2;vlabel=[:A,:A,:B]; elabel=[:f,:f];src=[1,2]; tgt=3
    end,), :A, [1=>:id_A,2=>:id_A])

S = Sketch(:Inj, schema, cones=[c])

function runtests()
  I = @acset S.cset begin A=2;B=1 end # not possible to have an injection
  es = init_premodel(S,I,[:A,:B])
  chase_db(S,es)
  @test test_models(es, S, [])

  I = @acset S.cset begin A=2; B=2 end
  es = init_premodel(S,I,[:A,:B])
  chase_db(S,es)
  expected = @acset S.cset begin A=2;B=2; f=[1,2] end
  @test test_models(es, S, [expected])

  return true
end

end # module
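# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): in every
# model of this sketch, f must be injective, i.e. no two elements of A share
# an image in B. `check_injective` is a new helper introduced only for
# illustration.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_injective(S=Inj.S)
  es = init_premodel(S, @acset(S.cset, begin A=2;B=2 end), [:A,:B])
  chase_db(S, es)
  for i in es.models
    X = get_model(es, S, i)
    @assert length(unique(X[:f])) == nparts(X, :A)  # f hits distinct elements of B
  end
  return true
end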
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
2152
module JointSurj # using Revise using Test using Catlab.CategoricalAlgebra using CombinatorialEnumeration """ Using the surjection encoding, this is a sketch for a pair of maps that are *jointly surjective*. A π₁ ↙ ↘ f PB ⇉ A+B ↠ C π₂ ↖ ↗ g B """ schema = @acset LabeledGraph begin V=5; E=7; vlabel=[:A,:B,:C,:A_B,:PB]; elabel=[:f,:g,:iA,:iB,:p1,:p2,:c]; src=[1,2,1,2,5,5,4]; tgt=[3,3,4,4,4,4,3] end """PB is a pullback: all pairs of A+B that agree on their value in c""" c = Cone(@acset(LabeledGraph, begin V=3;E=2;vlabel=[:A_B,:A_B,:C]; elabel=[:c,:c];src=[1,2]; tgt=3 end,), :PB, [1=>:p1,2=>:p2]) """(C,c) is the coequalizer of PB's legs""" cc = Cone(@acset(LabeledGraph, begin V=3;E=2;vlabel=[:PB,:A_B,:A_B]; elabel=[:p1, :p2]; src=1; tgt=2 end), :C, [2=>:c, 3=>:c]) """A_B is the coproduct A+B""" a_b = Cone(@acset(LabeledGraph, begin V=2;vlabel=[:A,:B] end), :A_B, [1=>:iA,2=>:iB]) eqs = [[[:f],[:iA,:c]],[[:g],[:iB,:c]]] S = Sketch(:JointSurj, schema, cones=[c], cocones=[cc,a_b,],eqs=eqs) function runtests() I = @acset S.cset begin A=2;B=2;C=2 end es = init_premodel(S,I, [:A,:B,:C]) chase_db(S,es) expected = [ @acset(S.cset, begin A=2;B=2;C=2;A_B=4;iA=[1,2];iB=[3,4];c=[1,1,2,2]; f=1;g=2;PB=8;p1=[1,1,2,2,3,3,4,4];p2=[1,2,1,2,3,4,3,4] end), @acset(S.cset, begin A=2;B=2;C=2;A_B=4;iA=[1,2];iB=[3,4];c=[1,2,1,2]; f=[1,2];g=[1,2];PB=8;p1=[1,1,2,2,3,3,4,4];p2=[1,3,2,4,1,3,2,4] end), @acset(S.cset, begin A=2;B=2;C=2;A_B=4;iA=[1,2];iB=[3,4];c=[1,2,2,2]; f=[1,2];g=2;PB=10; p1=[1,2,2,2,3,3,3,4,4,4];p2=[1,2,3,4,2,3,4,2,3,4] end), @acset(S.cset, begin A=2;B=2;C=2;A_B=4;iA=[1,2];iB=[3,4];c=[1,1,1,2]; f=1;g=[1,2];PB=10; p1=[1,1,1,2,2,2,3,3,3,4];p2=[1,2,3,1,2,3,1,2,3,4] end), ] @test test_models(es, S, expected) # we can also, knowing what A and B are, freeze A+B. I = @acset S.cset begin A=2;B=2;C=2;A_B=4 end es = init_premodel(S,I, [:A,:B,:C,:A_B]) chase_db(S,es) @test test_models(es, S, expected) return true end end # module
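# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): in every
# model of this sketch, f and g must be jointly surjective onto C, i.e.
# image(f) ∪ image(g) = C. `check_jointly_surjective` is a new helper
# introduced only for illustration.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_jointly_surjective(S=JointSurj.S)
  es = init_premodel(S, @acset(S.cset, begin A=2;B=2;C=2 end), [:A,:B,:C])
  chase_db(S, es)
  for i in es.models
    X = get_model(es, S, i)
    @assert Set(X[:f]) ∪ Set(X[:g]) == Set(parts(X, :C))  # every element of C is hit
  end
  return true
end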
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
2208
module LeftInvInvolution # using Revise using Test using Catlab.CategoricalAlgebra using CombinatorialEnumeration """ LEFT INVERSE / INVOLUTION Models are pairs of involutions inv:B -> B AND monomorphisms f:A->B (with right inverse g: B->A) This is just a simple illustrative example of sketches. TODO we will use it to show how this sketch is the pushout of two smaller sketches and how we can compute the models compositionally. """ ########## # Sketch # ########## fgschema = @acset LabeledGraph begin V = 2 E = 3 vlabel = [:A,:B] elabel = [:f,:g, :inv] src = [1, 2, 2] tgt = [2, 1, 2] end S = Sketch(:FG, fgschema; eqs=[ [[:f, :g], Symbol[]], [[:inv,:inv],Symbol[]]]) ######### # Tests # ######### function runtests() I = @acset S.cset begin A=1;B=2 end es = init_premodel(S,I,[:A,:B]) chase_db(S,es) expected = [@acset(S.cset, begin A=1;B=2;f=1;g=1;inv=[1,2]end), # id inv @acset(S.cset, begin A=1;B=2;f=1;g=1;inv=[2,1]end)] # swap inv @test test_models(es, S, expected) I = @acset S.cset begin A=2;B=2 end es = init_premodel(S,I,[:A,:B]) chase_db(S,es) expected = [@acset(S.cset, begin A=2;B=2;f=[1,2];g=[1,2];inv=[1,2]end), # id @acset(S.cset, begin A=2;B=2;f=[1,2];g=[1,2];inv=[2,1]end)] # swap @test test_models(es, S, expected) I = @acset S.cset begin A=2;B=1 end es = init_premodel(S,I,[:A,:B]) chase_db(S,es) @test test_models(es, S, []) # no left inv possible for f I = @acset S.cset begin A=2;B=3 end es = init_premodel(S,I,[:A,:B]) chase_db(S,es) # think of A as picking out a subset of B, f(A). let the excluded element be bₓ expected = [ # inv is id @acset(S.cset, begin A=2;B=3;f=[1,2];g=[1,2,1];inv=[1,2,3]end), # inv swaps f(A). @acset(S.cset, begin A=2;B=3;f=[1,2];g=[1,2,1];inv=[2,1,3]end), # inv swaps one element in f(A) with bₓ. g(bₓ) maps to the unswapped element. @acset(S.cset, begin A=2;B=3;f=[1,2];g=[1,2,1];inv=[1,3,2]end), # inv swaps one element in f(A) with bₓ. g(bₓ) maps to the swapped element. @acset(S.cset, begin A=2;B=3;f=[1,2];g=[1,2,2];inv=[1,3,2]end) ] @test test_models(es, S, expected) return true end end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
742
module Pairs
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration
using Test

"""
Cartesian Product
"""
pairschema = @acset LabeledGraph begin
  V=2; E=2; vlabel=[:s, :s2]; elabel=[:p1, :p2]; src=2; tgt=1
end

S = Sketch(:pairs, pairschema, cones=[
  Cone(@acset(LabeledGraph, begin V=2;vlabel = [:s, :s] end), :s2, [1=>:p1,2=>:p2])])

function runtests()
  I = @acset S.cset begin s=2 end
  es = init_premodel(S,I)
  chase_db(S,es)
  ex = @acset S.cset begin s=2; s2=4; p1=[1,2,1,2]; p2=[1,1,2,2] end
  @test test_models(es, S, [ex])

  I = @acset S.cset begin s=3 end
  es = init_premodel(S,I)
  chase_db(S,es)
  mo = get_model(es,S,last(sort(collect(es.models))))
  @test nparts(mo, :s2) == 9

  return true
end

end # module
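# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): the cone
# makes s2 the Cartesian product s × s, so |s2| = |s|² and each pair of
# projections occurs exactly once. `check_pairs` is a new helper introduced
# only for illustration.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_pairs(S=Pairs.S)
  es = init_premodel(S, @acset(S.cset, begin s=2 end))
  chase_db(S, es)
  for i in es.models
    X = get_model(es, S, i)
    @assert nparts(X, :s2) == nparts(X, :s)^2
    @assert allunique(collect(zip(X[:p1], X[:p2])))  # each (p1,p2) pair occurs once
  end
  return true
end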
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
755
module Perm
# using Revise
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration
using Test

"""
Permutations of a set, i.e. invertible endo-functions.
"""
permschema = @acset LabeledGraph begin
  V = 1
  E = 2
  vlabel = [:X]
  elabel = [:f, :f⁻¹]
  src = [1,1]
  tgt = [1,1]
end

S = Sketch(:perm, permschema, eqs=[[[:f, :f⁻¹],Symbol[]]])

function runtests()
  I = @acset S.cset begin X=3 end
  es = init_premodel(S,I, [:X])
  chase_db(S,es)
  expected = [
    @acset(S.cset, begin X=3;f=[1,2,3];f⁻¹=[1,2,3] end),
    @acset(S.cset, begin X=3;f=[1,3,2];f⁻¹=[1,3,2] end),
    @acset(S.cset, begin X=3;f=[2,3,1];f⁻¹=[3,1,2] end),
  ]
  @test test_models(es, S, expected)
  return true
end

end # module
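# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): the path
# equation [[:f, :f⁻¹], []] says that f followed by f⁻¹ is the identity, which
# on a finite set forces f to be a permutation. `check_perm` is a new helper
# introduced only for illustration.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_perm(S=Perm.S)
  es = init_premodel(S, @acset(S.cset, begin X=3 end), [:X])
  chase_db(S, es)
  for i in es.models
    M = get_model(es, S, i)
    @assert M[:f⁻¹][M[:f]] == collect(parts(M, :X))  # f then f⁻¹ is the identity
  end
  return true
end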
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
1300
module Petri # using Revise using Test using Catlab.CategoricalAlgebra using CombinatorialEnumeration using CSetAutomorphisms ########## # Sketch # ########## petschema = @acset LabeledGraph begin V=4; E=4; vlabel=[:S,:T,:I,:O]; elabel=[:is,:it,:os,:ot]; src= [3,3,4,4]; tgt= [1,2,1,2] end S = Sketch(:Petr, petschema) ######### # Tests # ######### """ Create all petri nets with i S/T/I/O brute force. """ function all_petri(i::Int) i < 3 || error("don't try with large i like $i") res = Dict() I = @acset S.cset begin S=i; T=i; I=i; O=i end for os in Iterators.product([1:i for _ in 1:i]...) set_subpart!(I, :os, collect(os)) for it in Iterators.product([1:i for _ in 1:i]...) set_subpart!(I, :it, collect(it)) for si in Iterators.product([1:i for _ in 1:i]...) set_subpart!(I, :is, collect(si)) for to in Iterators.product([1:i for _ in 1:i]...) set_subpart!(I, :ot, collect(to)) cN = call_nauty(I) res[cN.hsh] = cN.cset end end end end return collect(values(res)) end function runtests() I = @acset S.cset begin S=2;T=2;I=2;O=2 end; es = init_premodel(S,I,[:S,:T,:I,:O]); chase_db(S,es) expected = all_petri(2); @test test_models(es, S, expected) return true end end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
901
module ReflGraph
# using Revise
using Test
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration

"""
# REFLEXIVE GRAPHS #
"""
schema = @acset LabeledGraph begin
  V = 2; E = 3
  vlabel = [:V, :E]; elabel = [:refl, :src, :tgt]
  src = [1, 2, 2]
  tgt = [2, 1, 1]
end

S = Sketch(:reflgraph, schema, eqs=[[[:refl, :src], Symbol[]],
                                    [[:refl, :tgt], Symbol[]]])

function runtests()
  I = @acset S.cset begin V=2; E=3 end
  es = init_premodel(S,I, [:V,:E])
  chase_db(S,es)
  expected = [
    @acset(S.cset, begin V=2;E=3;refl=[1,2];src=[1,2,1];tgt=[1,2,1] end),
    @acset(S.cset, begin V=2;E=3;refl=[1,2];src=[1,2,1];tgt=[1,2,2] end),
  ]
  @test test_models(es, S, expected)

  I = @acset S.cset begin V=2; E=1 end
  es = init_premodel(S,I, [:V,:E])
  chase_db(S,es)
  @test test_models(es, S, [])

  return true
end

end # module
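# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): the two
# path equations say refl⋅src = id and refl⋅tgt = id, i.e. refl(v) is a loop
# at v in every model. `check_refl` is a new helper introduced only for
# illustration.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_refl(S=ReflGraph.S)
  es = init_premodel(S, @acset(S.cset, begin V=2; E=3 end), [:V,:E])
  chase_db(S, es)
  for i in es.models
    X = get_model(es, S, i)
    vs = collect(parts(X, :V))
    @assert X[:src][X[:refl]] == vs && X[:tgt][X[:refl]] == vs  # refl(v) is a loop at v
  end
  return true
end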
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
4902
module Semigroup # using Revise using Test using Catlab.CategoricalAlgebra using CombinatorialEnumeration using CSetAutomorphisms """ Semigroups. An associative binary operation. https://research-repository.st-andrews.ac.uk/handle/10023/945 Table 4.1 https://oeis.org/A027851: should be 1 5 24 188 1915 n | 1 | 2 | 3 | 4 | 5 # semi | 1 | 8 | 113 | ? | 183 732 | 17 061 118 # semi (iso)| 1 | 5 | 24 | 184 | 1915 | # seen | 1 | 12 | """ p1p2, p2p3, idk, kid = map(Symbol, ["π₁×π₂","π₂×π₃","id×k","k×id"]) semig_schema = @acset LabeledGraph begin V = 3; E = 10; vlabel = [:s, :s2, :s3] elabel = [:k, :π₁, :π₂, :Π₁, :Π₂, :Π₃, p1p2, p2p3, idk, kid] src = [2, 2, 2, 3, 3, 3, 3, 3, 3, 3] tgt = [1, 1, 1, 1, 1, 1, 2, 2, 2, 2] end n_cset(i) = prod([(i^2 * i), (i^3*i^2)^4]) # s2 is pair paircone = Cone(@acset(LabeledGraph, begin V = 2; vlabel = [:s, :s] end), :s2, [1=>:π₁, 2=>:π₂]) # s3 is triple tripcone = Cone(@acset(LabeledGraph, begin V = 3; vlabel = [:s, :s, :s] end), :s3, [1=>:Π₁, 2=>:Π₂, 3=>:Π₃]) semieqs = [ [[p1p2, :π₁], [:Π₁]], # p1p2_p1 [[p1p2, :π₂], [:Π₂]], #p1p2_p2 [[p2p3, :π₁], [:Π₂]], # p2p3_p1 [[p2p3, :π₂], [:Π₃]], #p2p3_p2 [[kid, :π₁], [p1p2,:k]], # kid_p1 [[kid, :π₂], [:Π₃]], # kid_p2 [[idk, :π₂], [p2p3,:k]], # idk_p2 [[idk, :π₁], [:Π₁]], # idk_p1 [[idk, :k], [kid,:k]], # assoc ] S = Sketch(:semig, semig_schema, cones=[paircone, tripcone], eqs=semieqs); function binfuns(i::Int)::Vector{Matrix{Int}} res = Matrix{Int}[] for x in Iterators.product([1:i for _ in 1:i^2]...) mat = reshape(collect(x), i, i) if isnothing(test_assoc(mat)) push!(res, mat); end end res end """Find all possible extensions of an associative binfun bh 1 elem""" function binfuns_rec(prev::Vector{Matrix{Int}}) n, _ = size(prev[1]) res = Matrix{Int}[] for p in prev # Need to consider existing i,j maps to the new elem for msk in Iterators.product([[false,true] for _ in 1:n^2]...) msk_ = reshape(collect(msk), n, n) newp = deepcopy(p) newp[msk_] .= (n+1) # Consider the products with the new element for x in Iterators.product([1:n+1 for _ in 1:(2*n+1)]...) x = vec(collect(x)) newmat = hcat(vcat(newp, x[1:n]'),x[n+1:end]) if isnothing(test_assoc(newmat)) # ta = test_assoc(newmat) # isnothing(ta) || error("assoc last bad $newmat \n$ta") push!(res, newmat); print(" $(length(res))") end end end end return res end """Doesn't check if it's actually associative""" function from_matrix(m::Matrix{Int64})::StructACSet n, n_ = size(m) n == n_ || error("Need square matrix") k = vec(m) p1_p2 = vec(collect(Iterators.product(collect(1:n),collect(1:n)))) p1p2d = Dict([v=>k for (k,v) in enumerate(p1_p2)]) p1, p2, p3, p1p2_, p2p3_, idk_, kid_ = [Int[] for _ in 1:7] for (a,b,c) in Iterators.product(collect(1:n),collect(1:n),collect(1:n)) push!(p1, a); push!(p2, b); push!(p3, c) push!(p1p2_, p1p2d[(a,b)]); push!(p2p3_, p1p2d[(b,c)]) push!(idk_, p1p2d[(a, k[p1p2d[(b,c)]])]) push!(kid_, p1p2d[(k[p1p2d[(a,b)]], c)]) end n2, n3 = length(m), n^3 I = S.cset() add_parts!(I, :s, n) add_parts!(I, :s2, n2; π₁=first.(p1_p2), π₂=last.(p1_p2), k=k) add_parts!(I, :s3, n3; Π₁=p1,Π₂=p2, Π₃=p3, Dict([p1p2=>p1p2_, p2p3=>p2p3_, idk=>idk_, kid=>kid_])...) 
return I end """Tests associativity of multiplications involving LAST element""" function test_assoc_last(m::Matrix{Int})::Bool n,_ = size(m) if m[n,m[n,n]] != m[m[n,n],n] return false end for i in 1:n-1 if m[i,m[n,n]] != m[m[i,n],n] return false elseif m[n,m[n,i]] != m[m[n,n],i] return false elseif m[n,m[i,n]] != m[m[n,i],n] return false end end for (i,j) in Iterators.product(1:n-1,1:n-1) if m[i,m[j,n]] != m[m[i,j],n] return false elseif m[i,m[n,j]] != m[m[i,n],j] return false elseif m[n,m[i,j]] != m[m[n,i],j] return false end end return true end """Tests associativity""" function test_assoc(m::Matrix{Int})::Union{Nothing, Tuple{Int,Int,Int}} n,_ = size(m) for (i,j,k) in Iterators.product(1:n,1:n,1:n) if m[i,m[j,k]] != m[m[i,j],k] return (i,j,k) end end return nothing end function to_matrix(X::StructACSet)::Matrix{Int} m = zeros(Int, (nparts(X,:s),nparts(X,:s))) for (k, p1, p2) in zip(X[:k],X[:π₁],X[:π₂]) m[p1,p2] = k end m end # """Naive filter strategy to get semigroups""" get_semis(i::Int) = collect(Set(values(Dict(map(from_matrix.(binfuns(i))) do m call_nauty(m).hsh => m end)))) function runtests() I = @acset S.cset begin s=2 end; es = init_premodel(S,I, [:s]); chase_db(S,es) @test test_models(es,S,get_semis(2)) return true end end # module
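# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): the helper
# `test_assoc` above returns `nothing` for an associative multiplication table
# and a witness triple (i,j,k) otherwise. The two tables below are small,
# hand-checked illustrations.
using Test

@test isnothing(Semigroup.test_assoc([1 1; 1 2]))   # min on {1,2}: associative
@test !isnothing(Semigroup.test_assoc([2 1; 1 1]))  # not associative: a witness triple is returned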
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
1189
module Surj
# using Revise
using Test
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration

"""
Encoding of a surjection as a pair cone and cocone as described in
Barr and Wells CTCS 10.4.6

  d₀    d
C ⇉ A ⟶ B
  d₁
"""
schema = @acset LabeledGraph begin
  V=3; E=3; vlabel=[:C,:A,:B]; elabel=[:d0,:d1,:d]; src=[1,1,2]; tgt=[2,2,3]
end

"""c is a pullback: all pairs of a that agree on their value in d"""
c = Cone(@acset(LabeledGraph, begin
      V=3;E=2;vlabel=[:A,:A,:B]; elabel=[:d,:d];src=[1,2]; tgt=3
    end,), :C, [1=>:d0,2=>:d1])

"""b is the coequalizer of c's legs"""
cc = Cone(@acset(LabeledGraph, begin
      V=2;E=2;vlabel=[:C,:A]; elabel=[:d0, :d1]; src=1; tgt=2
    end), :B, [2=>:d])

S = Sketch(:Surj, schema, cones=[c], cocones=[cc])

function runtests()
  I = @acset S.cset begin A=1;B=2 end # not possible to have surj
  es = init_premodel(S,I,[:A,:B])
  chase_db(S,es)
  @test test_models(es, S, [])

  I = @acset S.cset begin A=3; B=2 end
  es = init_premodel(S,I,[:A,:B])
  chase_db(S,es)
  expected = @acset S.cset begin
    A=3;B=2;C=5; d=[1,1,2];d0=[1,1,2,2,3];d1=[1,2,1,2,3]
  end
  @test test_models(es, S, [expected])

  return true
end

end # module
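# ---------------------------------------------------------------------------
# Hedged usage sketch (an addition, not part of the original file): in every
# model of this sketch, d must be surjective onto B. `check_surjective` is a
# new helper introduced only for illustration.
using Catlab.CategoricalAlgebra, CombinatorialEnumeration

function check_surjective(S=Surj.S)
  es = init_premodel(S, @acset(S.cset, begin A=3; B=2 end), [:A,:B])
  chase_db(S, es)
  for i in es.models
    X = get_model(es, S, i)
    @assert Set(X[:d]) == Set(parts(X, :B))  # every element of B is hit by d
  end
  return true
end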
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
667
module Trips
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration
using Test

"""
3-ary Cartesian product
"""
tripschema = @acset LabeledGraph begin
  V = 2; E = 3; vlabel = [:s, :s3]; elabel = [:p1, :p2, :p3]; src = 2; tgt = 1
end

td = @acset LabeledGraph begin V = 3; vlabel = [:s, :s, :s,] end

S = Sketch(:trips, tripschema, cones=[Cone(td, :s3, [1=>:p1,2=>:p2, 3=>:p3])])

function runtests()
  I = @acset S.cset begin s=2 end
  es = init_premodel(S,I)
  chase_db(S,es)
  ex = @acset S.cset begin
    s=2; s3=8; p1=[1,2,1,2,1,2,1,2]; p2=[1,1,2,2,1,1,2,2]; p3=[1,1,1,1,2,2,2,2]
  end
  @test test_models(es, S, [ex])
  return true
end

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
747
using Documenter
using CombinatorialEnumeration

# Set Literate.jl config if not being compiled on recognized service.
config = Dict{String,String}()
if !(haskey(ENV, "GITHUB_ACTIONS") || haskey(ENV, "GITLAB_CI"))
  config["nbviewer_root_url"] = "https://nbviewer.jupyter.org/github/kris-brown/CombinatorialEnumeration.jl/blob/gh-pages/dev"
  config["repo_root_url"] = "https://github.com/kris-brown/CombinatorialEnumeration.jl/blob/main/docs"
end

makedocs(
  sitename = "CombinatorialEnumeration",
  format = Documenter.HTML(),
  modules = [CombinatorialEnumeration]
)

@info "Deploying docs"
deploydocs(
  target = "build",
  repo = "github.com/kris-brown/CombinatorialEnumeration.jl.git",
  branch = "gh-pages",
  devbranch = "main"
)
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
14608
using ..Models: is_surjective """ Compute a normal form for IntDisjointSets so that equivalent ones can be identified """ function norm_eq(x::IntDisjointSets{Int})::Vector{Int} clsses = Vector{Union{Int,Nothing}}(fill(nothing, length(x))) Vector{Int}(map(1:length(x)) do i eqc = find_root(x,i) if isnothing(clsses[eqc]) clsses[eqc] = length(clsses) - count(isnothing, clsses) + 1 end clsses[eqc] end) end # Colimits ########## """ Propagate cocone information for cocones that are not frozen. If the previous change added a leg element (which may have frozen the leg) we want to run the checks again, even though it appears as if the diagram is frozen. """ function propagate_cocones!(S::Sketch,J::SketchModel,f::CSetTransformation,ch::Change) res = Change[] for (i,cc) in enumerate(S.cocones) legcond = any(l->!is_surjective(f[l]), unique(last.(cc.legs))) if legcond || !([cc.apex,vlabel(cc)...] ⊆ J.aux.frozen[1] && ((last.(cc.legs) ∪ elabel(cc)) ⊆ J.aux.frozen[2])) append!(res, propagate_cocone!(S, J, f, i, ch)) end end res end """ Rebuild the cocone equivalence classes (across different tables) from scratch. This could be made more incremental using the change + the old cocone data """ function update_cocones!(S::Sketch,J::SketchModel,f::ACSetTransformation,ch::Change) new_cocones = map(zip(S.cocones, J.aux.cocones)) do (c, (_,_)) # create new aggregation of all tables in the cocone diagram cdict = Tuple{Symbol, Int, Int}[] for (vi,v) in enumerate(vlabel(c.d)) for p in parts(J.model, v) push!(cdict, (v, vi, p)) end end cdict_inv = Dict([v=>i for (i,v) in enumerate(cdict)]) new_eq = IntDisjointSets(length(cdict)) # Merge elements based on leg into apex being the same ldict = [l=>[ti for (ti, l_) in c.legs if l_==l] for l in unique(last.(c.legs))] for (l,ltabs) in filter(x->length(x[2])>1, ldict) ref_inds = findall(x->x[2]==first(ltabs), cdict) for ltab in ltabs[2:end] for (i,j) in zip(ref_inds, findall(x->x[2]==ltab, cdict)) union!(new_eq, i, j) end end end # merge elements based on eq for (v, veq) in J.aux.eqs for eqset in collect.(eq_sets(veq; remove_singles=true)) e1, rest... = collect(eqset) [union!(new_eq, cdict_inv[v=>e1], cdict_inv[v=>i]) for i in rest] end end # Quotient by maps in the diagram for h in unique(elabel(c.d)) sT, tT, (hsrc, htgt) = src(S,h), tgt(S,h), add_srctgt(h) for (s,t) in zip(J.model[hsrc], J.model[htgt]) for (i,(sT_,_,s_)) in enumerate(cdict) if (sT_,s_) == (sT,s) for (j,(tT_,_,t_)) in enumerate(cdict) if (tT_,t_) == (tT,t) union!(new_eq, i, j) end end end end end end return new_eq => cdict end J.aux.cocones = new_cocones end """ We assume that the cocone data (of connected components in the category of elements) has already been updated in update_cocones!. There are two ways to perform cocone constraint inference: 1.) If two elements map to the same cocone element that are in a connected component, then the cocone elements must be merged. 2.) If two objects in distinct connected components map to the same cocone apex element, then we must fail if it is not possible for some future assignment of foreign keys to put them in the same connected component. 
""" function propagate_cocone!(S::Sketch, J::SketchModel,f::CSetTransformation, ci::Int, c::Change) verbose = false cc, (ccdata, cd), res = S.cocones[ci], J.aux.cocones[ci], Change[] if verbose println("updating cocone $ci with frozen $(J.aux.frozen) apex $(cc.apex) po data $(J.aux.cocones) and ") show(stdout,"text/plain",crel_to_cset(S, J.model)[1]) end # We care about, ∀ apexes, which connected components map to it ap_to_cc = DefaultDict(()->Set{Int}()) # ap₁ -> [cc₁,cc₂,...] # We care about, ∀ connected components, which apexes are mapped to cc_to_ap = DefaultDict(()->Set{Int}()) # cc₁ -> [ap₁, ap₂,...] for (ccdata_i, (t,ti,v)) in enumerate(cd) for (_,l) in filter(x->x[1]==ti, cc.legs) l_src, l_tgt = add_srctgt(l) for ap in J.model[incident(J.model, v, l_src), l_tgt] ccmp = find_root!(ccdata, ccdata_i) push!(ap_to_cc[ap], ccmp) push!(cc_to_ap[ccmp], ap) end end end frozen_diag = vlabel(cc) ⊆ J.aux.frozen[1] && elabel(cc) ⊆ J.aux.frozen[2] if verbose println("cc_to_ap $cc_to_ap\nap_to_cc $ap_to_cc") end # 1.) check for apex elements that should be merged for vs in collect.(filter(x->length(x)>1, collect(values(cc_to_ap)))) if cc.apex ∈ J.aux.frozen[1] throw(ModelException("$(cc.apex)#$vs must be merged, but it is frozen")) end if verbose println("MERGING COCONE APEX ELEMS $vs") end push!(res, Merge(S,J,Dict(cc.apex=>[vs]))) end # 2a) if diagram completely determined, we have one apex elem per connected comp if frozen_diag && cc.apex ∉ J.aux.frozen[1] for cc_root in unique(find_root!(ccdata, i) for i in 1:length(ccdata)) if !haskey(cc_to_ap, cc_root) if cc.apex ∈ J.aux.frozen[1] throw(ModelException("Diagram completely determined but connected component $cc_root not matched to apex")) end if verbose println("New cc_root that is unmatched $cc_root") end newL, newI = S.crel(), S.crel() ipartdict = Dict() ILd, IRd = [DefaultDict{Symbol,Vector{Int}}(()->Int[]) for _ in 1:2] add_part!(newL, cc.apex) ccinds = [i for i in 1:length(ccdata) if find_root!(ccdata, i)==cc_root] for (cctab, cctabi, ccind) in cd[ccinds] if ccind ∉ collect(IRd[cctab]) # don't repeat when same table appears in diagram multiple times for (_,l) in filter(x->x[1]==cctabi, cc.legs) lsrctgt = add_srctgt(l) if haskey(ipartdict, cctab=>ccind) (ipart,lpart) = ipartdict[cctab=>ccind] else ipart = add_part!(newI, cctab) lpart = add_part!(newL, cctab) ipartdict[cctab=>ccind] = ipart => lpart push!(ILd[cctab], lpart) push!(IRd[cctab], ccind); end add_part!(newL, l; Dict(zip(lsrctgt, [lpart, 1]))...) end end end IL = ACSetTransformation(newI,newL; ILd...) IR = ACSetTransformation(newI,J.model; IRd...) ad = Addition(S,J,IL,IR) push!(res, ad) else # set all legs if not yet determined for (cctab, cctabi, ccind) in cd[[i for i in 1:length(ccdata) if find_root!(ccdata, i) == cc_root]] for l in filter(l->src(S,l)==cctab,unique(last.(legs(cc)))) lsrc,ltgt = add_srctgt(l) if isempty(incident(J.model, ccind, lsrc)) error("""We're expecting the legs to be filled already... we can fill this in if that's not the case though""") end end end end end end # cardinality checks if the apex # is known if cc.apex ∈ J.aux.frozen[1] startJ = project(S,merge_eq(S,J.model,J.aux.eqs), cc) mn, mx = [minmax_groups(S,startJ,J.aux.frozen, cc, ccdata, cd; is_min=x) for x in [true,false]] if verbose println("mn $mn -- parts $(nparts(J.model, cc.apex)) -- mx $mx\n") end if !(mn <= nparts(J.model, cc.apex) <= mx) throw(ModelException("mn $mn <= #$(cc.apex) $(nparts(J.model, cc.apex)) <= mx $mx")) end end # 2.) 
check for connected components that cannot possibly be merged startJ = project(S,merge_eq(S,J.model,J.aux.eqs), cc) if verbose println("check for connected components that cannot possibly be merged") println("values(ap_to_cc) $(collect(values(ap_to_cc)))") end for vs in collect.(filter(x->length(x)>1, collect(values(ap_to_cc)))) # conservative approach - don't try anything if tables not frozen # TODO revisit this assumption, maybe something can still be inferred? if vlabel(cc) ⊆ J.aux.frozen[1] if !connection_possible(S, startJ, cc, ccdata, cd, vs) throw(ModelException("Connected components cannot possibly be merged")) end end end res end """ The minimum # of connected components in a colimit diagram (or maximum) This would be a simple branching search problem except we'd like to be able to reason even about tables that are not yet frozen (could grow or merge). If the diagram is a DAG with loops, we can say that, if there exists an unfrozen table joining two other tables, that it's possible for all the elements to be collapsed into one group (in case we are trying to minimize groups) and, if there exists an unfrozen table that is terminal, that there could exist MAXINT groups. """ function minmax_groups(S::Sketch,J_orig::StructACSet,freeze,cc::Cone, conn_orig::IntDisjointSets{Int}, cd::Vector{Tuple{Symbol,Int,Int}}; is_min::Bool=true) verbose = false cc.is_dag || error("This only works with dag cocones") ofreeze,hfreeze = freeze legtabs = unique(first.(cc.legs)) connd = Dict(v=>k for (k,v) in enumerate(cd)) n_g = num_groups(conn_orig) minmax = is_min ? min : max J_orig = deepcopy(J_orig) d_no_loop = deepcopy(cc.d) # For an unfrozen object, the table may be empty, so we cannot empty_unfrozen_dict = Dict() # Table #2 ↦ indices [3,4,5] in cd, for example tab_dict = DefaultDict{Int,Vector{Int}}(()->Int[]) for (i,(_,v,_)) in enumerate(cd) push!(tab_dict[v], i) end tab_colors(con, tab::Int) = [find_root!(con, i) for i in tab_dict[tab]] rem_edges!(d_no_loop, [i for (i,(s,t)) in enumerate(zip(cc.d[:src],cc.d[:tgt])) if s==t]) poss = [deepcopy(conn_orig)] if verbose println("$(is_min ? "min" : "max") # of connected components in $cd ? initial n_g $n_g "); show(stdout,"text/plain",crel_to_cset(S,J_orig)[1]) end # an unfrozen table that has a leg into the apex could have any number of # things, so the max number of groups is anything. if !is_min && cc.d[legtabs, :vlabel] ⊈ ofreeze return typemax(Int) end if verbose println("\tlegtabs $legtabs ⊈ ofreeze $ofreeze") end for v in reverse(vertices(cc.d)) new_poss = [] sTab = cc.d[v, :vlabel] e_is = setdiff(incident(cc.d, v, :src), refl(cc.d)) if isempty(e_is) continue end es, t_is = cc.d[e_is, :elabel], cc.d[e_is, :tgt] tTabs = cc.d[t_is, :vlabel] if verbose println("\tconsidering $sTab#$v w/ es $es") end for conn in poss tCols = [tab_colors(conn, tTab) for tTab in t_is] if sTab ∉ ofreeze # Union everything below if we hit an unfrozen table and are MINIMIZing if is_min aps = [connd[x] for x in filter(x->x[2]∈[v,t_is...], cd)] if isempty(aps) nothing # nothing to do? else ap1, all_parts... = aps for ap in all_parts union!(conn, ap1, ap) end end else # assume the unfrozen table gets quotiented to 1 element if MAXXing # so we only need to consider options that merge one element per targ seen = Dict() for combo in Base.product(tCols...) push!(new_poss, deepcopy(conn)) c1, cs... 
= combo for c in cs union!(new_poss[end], c1, c) end end end else # branch on all possible FK assignments tCols = map(tab_dict[v]) do src_i (tab_check,tab_i_check,s_part) = cd[src_i] tab_check == sTab || error("$tab_check != $sTab") v == tab_i_check || error("$v != $tab_i_check") s_eqc = find_root!(conn, src_i) vcat(map(zip(es,t_is)) do (e, t_i) e_src, e_tgt = add_srctgt(e) if isempty(incident(J_orig, s_part, e_src)) [s_eqc=>t_eqc for t_eqc in unique(tab_colors(conn, t_i))] else [nothing] end end...) end for combo in Base.product(tCols...) push!(new_poss, deepcopy(conn)) for c in filter(x->!isnothing(x),combo) union!(new_poss[end], c[1],c[2]) # error("to implement: Tcols $tCols combo $combo ($c)") end end end n_g = minmax(n_g, num_groups(conn)) end poss = unique(norm_eq, [poss..., new_poss...]) end for conn in poss n_g = minmax(n_g, num_groups(conn)) end if verbose println("\t**returning $n_g**\n\n") end return n_g end """ Check if there exists a foreign key assignment that connects n sets of elements in a C-Set. """ function connection_possible(S::Sketch,J_orig::StructACSet, cocone::Cone, conn_orig::IntDisjointSets{Int}, conndict::Vector{Tuple{Symbol,Int,Int}}, comps::Vector{Int}) verbose = false if verbose println("are $comps mergable ?") end connd = Dict([v=>i for (i,v) in enumerate(conndict)]) is_tot(J,e) = length(unique(J[add_src(e)])) == nparts(J, src(S,e)) queue = [J_orig=>conn_orig] while !isempty(queue) J, conn = pop!(queue) if verbose println("popping $conn (in queue: $(length(queue)))") end es = [e for e in unique(elabel(cocone.d)) if !is_tot(J,e)] if isempty(es) continue end # this branch cannot branch further e = first(es) # or some more intelligent way to pick one? for e_i in incident(cocone.d, e, :elabel) s_i, t_i = src(cocone.d, e_i), tgt(cocone.d, e_i) (se, te), sT, tT = add_srctgt(e), src(S,e), tgt(S,e) if verbose println("\tbranch on $e:$sT->$tT") end undefd = first(collect(setdiff(parts(J, sT), J[se]))) u_ind = connd[(sT,s_i,undefd)] u_cc = find_root!(conn, u_ind) if verbose println("\tlooking for fk targets for $sT#$undefd") end for p in parts(J,tT) # TODO we actually only need to consider distinct orbits if verbose println("\tconsidering ->$tT#$p") end p_ind = connd[(tT,t_i,p)] p_cc = find_root!(conn, p_ind) if u_cc != p_cc J_, conn_ = deepcopy.([J, conn]) add_part!(J_, e; Dict(zip([se,te],[undefd,p]))...) union!(conn_, u_ind, p_ind) if length(unique([find_root!(conn_, x) for x in comps])) == 1 return true else push!(queue, J_=>conn_) end end end end end return false # no more branching options, comps still unconnected end
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
391
module CombinatorialEnumeration
using Reexport

include(joinpath(@__DIR__, "Sketches.jl"))
include(joinpath(@__DIR__, "Models.jl"))
include(joinpath(@__DIR__, "DB.jl"))
include(joinpath(@__DIR__, "Propagate.jl"))
include(joinpath(@__DIR__, "ModEnum.jl"))

@reexport using .Sketches
@reexport using .Models
@reexport using .DB
@reexport using .Propagate
@reexport using .ModEnum

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
7088
using Catlab.WiringDiagrams using ..Sketches: add_id using ..Models: frozen_hom # Limits ######## function propagate_cones!(S::Sketch, J::SketchModel, f::CSetTransformation, ch::Change) # vcat([propagate_cone!(S, J, f, i, ch) for i in 1:length(S.cones)]...) verbose = false res = Change[] for (i,c) in enumerate(S.cones) legcond = any(l->!is_surjective(f[l]), vcat(elabel(c),vlabel(c))) if legcond || !([c.apex,vlabel(c)...] ⊆ J.aux.frozen[1] && ((last.(c.legs) ∪ elabel(c)) ⊆ J.aux.frozen[2])) append!(res, propagate_cone!(S, J, f, i, ch)) else if verbose println("skipping cone $i w/ frozen $(J.aux.frozen)") end end end res end """ Propagate info related to a cone. For example: a cone object D over a cospan B -> A <- C (i.e. a pullback) Imagine all sets have three elements. If b₁ and b₂ are mapped to a₁ and c₁ and c₂ are mapped to a₁ and a₂ (with c₃'s maps unknown), then a conjunctive query looking for instances of the diagram should return: QueryRes A B C ---------------- 1 a₁ b₁ c₁ 2 a₁ b₂ c₁ Because the functions are partial in the premodel, there may be limit objects that will be discovered to exist (by merging elements or adding new connections) So the query result is a *lower* bound on the number of elements in the apex. This means we expect there to be at least two objects in the limit object D. If an element already exists with the same legs, then we are good. Otherwise we need to add a new element. It would be great to search for patterns incrementally. However, even a basic version of this (i.e. search for limits in a region that has 'changed') requires *incremental graph pattern matching*, which is not yet implemented in Catlab. It seems likely that there are even smarter kinds of search one can do depending on whether it was a merge or an addition, but this has yet to be worked out. If we merge together two apex elements, then we must merge together the corresponding leg values. --- If we merge together two cone diagram elements, then we may have to merge together two apex elements. We may also create new limits (and have to create new apex elements). --- If we add an apex element, we need to determine if (including the rest of the addition) it is possible for more limits to appear. If we really kept track of all possible limits (given an uncertain diagram), then we could set the legs (if there were exactly one), fail (if there were zero), otherwise do nothing. A conservative approach now is to say that there are zero remaining possible cones if the diagram is fully determinate (all FKs are known). --- If we add *diagram* elements, new cones apex elements may be induced. --- Unfortunately we do not use the fact we know the incremental change between models to do this more intelligently. If the apex and diagram objects/maps are frozen but not the legs, our query results should match up one-to-one with apex elements. However, we may not know how they should be matched, so the cone can actually generate a list of possibilities to branch on (in addition to changes that must be executed). - This isn't yet supported and will fail. 
""" function propagate_cone!(S::Sketch, J_::SketchModel, m::CSetTransformation, ci::Int, ch::Change) res = Change[] cone_ = S.cones[ci] ap = cone_.apex idap = add_id(ap) J, M0 = J_.model, dom(m) # new / old model verbose = false if verbose println("updating cone $ap with m[apex] $(collect(m[ap]))") show(stdout,"text/plain", J) println(J_.aux.eqs) end if (vlabel(cone_) ∪ [ap]) ⊆ J_.aux.frozen[1] && elabel(cone_) ⊆ J_.aux.frozen[2] && last.(cone_.legs) ⊈ J_.aux.frozen[2] msg = "Frozen $(J_.frozen) apex $ap vs $(vlabel(cone_)) es $(elabel(cone_))" error("Cones w/ frozen apex + diagram but unfrozen legs unsupported\n$msg") end # Merged cone elements induced merged values along their legs cones = DefaultDict{Vector{Int},Vector{Int}}(()->Int[]) for c in parts(J_.model, ap) pre = preimage(m[ap], c) if length(pre) > 1 for legedge in filter(!=(idap),cone_.ulegs) tgttab = tgt(S, legedge) legvals = Set([find_root!(J_.aux.eqs[tgttab], m[tgttab](fk(S,M0, legedge,p))) for p in pre]) if length(legvals) > 1 str = "merging leg ($ap -> $legedge -> $tgttab) vals $legvals" if verbose println(str) end push!(res, Merge(S,J_,Dict([tgttab=>[legvals]]))) end end end quot_legs = map(last.(cone_.legs)) do x y = x == idap ? c : fk(S,J_,x,c) return isnothing(y) ? nothing : find_root!(J_.aux.eqs[tgt(S,x)],y) end if !any(isnothing, quot_legs) push!(cones[quot_legs], c) end end # Merged leg values induced merged cone elements for quot_cones in filter(x->length(x)>1, collect(values(cones))) eqcs = collect(Set([find_root!(J_.aux.eqs[ap],x) for x in quot_cones])) if length(eqcs) > 1 if verbose println("Merging cone apexes $eqcs") end push!(res, Merge(S,J_, Dict([ap=>[eqcs]]))) end end # UNDERESTIMATE of cones in the new model if nv(cone_.d) == 0 length(cones)==1 || throw(ModelException("Wrong number of 1 objects")) return res end sums = Addition[] query_res = nv(cone_.d) == 0 ? () : query(J, cone_.uwd) mult_legs = collect(filter(x->length(x) > 1, collect(values(cone_.leg_inds)))) new_cones = Dict{Vector{Int},Union{Nothing,Int}}() for qres_ in unique(collect.(zip(query_res...))) skip = false qres = [find_root!(J_.aux.eqs[tgt(S,l)], qres_[i]) for (i,l) in cone_.legs] if verbose println("qres_ $qres_ qres $qres") end mult_leg_viol = [vs for vs in mult_legs if length(unique(qres_[vs])) > 1] if !isempty(mult_leg_viol) skip |= true if all(l->frozen_hom(S,J_,l), last.(cone_.legs)) && !haskey(cones,qres) ls = last.([first(cone_.legs[vs]) for vs in mult_leg_viol]) throw(ModelException("Identical legs $ls should point to same element: qres_ $qres_")) end end if skip continue end # Add a new cone if not seen before if !haskey(cones, qres) I, L = S.crel(), S.crel() IJd, ILd = [DefaultDict(()->Int[]) for _ in 1:2] add_part!(L, ap) lrmap = Dict{Pair{Symbol, Int}, Int}() for (res_v, l) in filter(x->x[2]!=idap,collect(zip(qres, last.(cone_.legs)))) ls, lt = add_srctgt(l) legtab = tgt(S,l) lr = legtab=>res_v if haskey(lrmap, lr) tr = lrmap[lr] else add_part!(I, legtab) tr = add_part!(L, legtab) push!(IJd[legtab], res_v) push!(ILd[legtab], tr) lrmap[lr] = tr end add_part!(L, l; Dict([ls=>1,lt=>tr])...) end IJ = ACSetTransformation(I, J; IJd...) IL = ACSetTransformation(I, L; ILd...) if verbose println("Adding a new cone with legs $qres") end push!(sums, Addition(S, J_, IL, IJ)) new_cones[qres] = nothing end end if !isempty(sums) push!(res, merge(S,J_, sums)) end res end
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
3813
module DB export init_db, init_premodel, add_premodel, get_model, EnumState, Prop, MergeEdge,AddEdge, Init, Branch import ..Sketches: show_lg """ Interact an in-memory datastore We formerly supported a postgres backend when the scale is beyond computer memory (or we want to serialize results to be used much later). This could be reimplemented if needed. """ using ..Sketches using ..Models using Catlab.CategoricalAlgebra using CSetAutomorphisms ############################# abstract type EdgeChange end """ No change is done to the input: these are just the additional changes that were discovered while propagating an add/merge change """ struct MergeEdge <: EdgeChange merge::Merge queued::Addition m::ACSetTransformation end struct AddEdge <: EdgeChange add::Addition m::ACSetTransformation end struct Branch <: EdgeChange add::Addition m::ACSetTransformation end struct Init <: EdgeChange add::Addition m::ACSetTransformation end to_symbol(::Init) = :I to_symbol(::AddEdge) = :A to_symbol(::MergeEdge) = :M to_symbol(::Branch) = :B # DB alternative: local memory """ grph - relation between models. Edges are either branch, add, or merge premodels - partially filled out models, seen so far, indexed by their hash pk - vector of hash values for each model seen so far prop - vector of data that is generated from propagating constraints on a premodel ms - a morphism for each edge fail - subset of premodels which fail models - subset of premodels which are complete. This can be less efficiently computed on the fly as models which (to be used in the future) fired - subset of *edges* which have been processed to_branch - propagating constraints may give us something to branch on that's better than the generic 'pick a FK undefined for a particular input' """ mutable struct EnumState grph::LabeledGraph premodels::Vector{SketchModel} ms::Vector{EdgeChange} pk::Vector{String} fail::Set{Int} models::Set{Int} prop::Vector{Union{Nothing,Tuple{AuxData, Addition, Merge}}} # to_branch::Vector{Any} function EnumState() return new(LabeledGraph(),SketchModel[], EdgeChange[], String[], Set{Int}(), Set{Int}(), Any[]) end end show_lg(es::EnumState) = show_lg(es.grph) Base.length(es::EnumState) = length(es.premodels) Base.getindex(es::EnumState, i::Int) = es.premodels[i] Base.getindex(es::EnumState, i::String) = es.premodels[findfirst(==(i), es.pk)] function add_premodel(es::EnumState, S::Sketch, J::SketchModel; parent::Union{Nothing,Pair{Int,E}}=nothing)::Int where {E <: EdgeChange} naut = call_nauty(J.model) found = findfirst(==(naut.hsh), es.pk) if !isnothing(found) new_v = found else push!(es.premodels, J) push!(es.prop, nothing) push!(es.pk, naut.hsh) new_v = add_part!(es.grph, :V; vlabel=Symbol(string(length(es.pk)))) end if !isnothing(parent) p_i, p_e = parent add_part!(es.grph, :E; src=p_i, tgt=new_v, elabel=to_symbol(p_e)) push!(es.ms, p_e) end return new_v end init_premodel(S::Sketch, ch::StructACSet, freeze=Symbol[]) = init_premodel(EnumState(), S, ch, freeze) function init_premodel(es::EnumState, S::Sketch, ch::StructACSet, freeze=Symbol[]) for o in [c.apex for c in S.cones if nv(c.d)==0 && nparts(ch, c.apex) == 0] add_part!(ch, o) end J = create_premodel(S, Dict(), freeze) i = add_premodel(es, S, J) ch = cset_to_crel(S, ch) ad = Addition(S, J, homomorphism(J.model,ch;monic=true), id(J.model)) m = exec_change(S, J.model, ad) J.model = codom(m) J.aux.frozen = (J.aux.frozen[1] ∪ freeze) => J.aux.frozen[2] add_premodel(es, S, J; parent=i=>Init(ad,m)) return es end get_model(es::EnumState, S::Sketch, 
i::Int)::StructACSet = first(crel_to_cset(S, es[i])) end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
3135
# Functionality ############### """ Note which elements are equal due to relations actually representing functions a₁ -> b₁ a₂ -> b₂ a₁ -> b₃ a₃ -> b₄ Because a₁ is mapped to b₁ and b₃, we deduce b₁=b₃. If the equivalence relation has it such that a₂=a₃, then we'd likewise conclude b₂=b₄ --- If we merge elements in the domain of a function f, we look at all the elements that share their equivalence classes. We look at all of the equivalence classes that are mapped to by f and merge those. """ function quotient_functions!(S::Sketch, J::SketchModel, h::CSetTransformation, m::Merge) L, I = codom(m.l), apex(m) res = Merge[] for v in vlabel(S) for eqc in parts(L, v) els = preimage(m.l[v], eqc) if length(els) > 1 # get everything equivalent to these elements, *including* new info r_eqcs = Set([find_root!(J.aux.eqs[v], e) for e in (m.r ⋅ h)[v](els)]) r_els = findall(i->find_root!(J.aux.eqs[v], i) ∈ r_eqcs, parts(J.model, v)) for h in hom_out(S, v) s, t = add_srctgt(h) targ = tgt(S,h) ts = J.model[vcat(incident(J.model, r_els, s)...), t] t_eqcs = Set([find_root!(J.aux.eqs[targ], i) for i in ts]) if length(t_eqcs) > 1 push!(res, Merge(S, J, Dict([targ=>[collect(t_eqcs)]]))) end end end end end return res end """ For each instance of a relation we add, we must check whether its domain has been mapped to something. If it's mapped to something in a different equivalence class, merge. """ function quotient_functions!(S::Sketch, J_::SketchModel, h::CSetTransformation, ad::Addition) verbose = false if verbose println("quotienting with h=$(Any[k=>v for (k,v) in pairs(components(h)) if !isempty(collect(v))]) and ad $ad ") show(stdout,"text/plain",J_.model) println("addition L $(Any[k=>v for (k,v) in pairs(components(ad.l)) if !isempty(collect(v))])") end L, I = codom(ad.l), apex(ad) res = Merge[] J = J_.model for (d, srcobj, tgtobj) in elabel(S, true) dsrc, dtgt = add_srctgt(d) # We don't care about newly introduced srcs. # (But should we care about newly introduced srcs which have # multiple newly-introduced outgoing FKs?) for e in parts(L, d) if verbose println("d $d:$srcobj->$tgtobj #$e ad.l[srcobj] $(ad.l[srcobj])") end i_src = preimage(ad.l[srcobj], L[e, dsrc]) if !isempty(i_src) # For such a relation, get the model element corresponding to the src s = (ad.r ⋅ h)[srcobj](only(i_src)) rel = incident(J, s, dsrc) # get all the relations the source has already # Get the eq classes of things the source is related to t_eqcs = Set([find_root!(J_.aux.eqs[tgtobj], t) for t in J[rel, dtgt]]) if length(t_eqcs) > 1 if verbose println("isrc $i_src t_eqcs $t_eqcs") end if tgtobj ∈ J_.aux.frozen[1] throw(ModelException("Functionality imposs")) end push!(res, Merge(S, J_, Dict([tgtobj=>[collect(t_eqcs)]]))) end end end end return res end
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
6889
module ModEnum export chase_db, test_models, init_db using ..Sketches using ..Models using ..DB using ..Propagate using ..Models: eq_sets, is_total using Catlab.CategoricalAlgebra, Catlab.Theories using CSetAutomorphisms using Test using Combinatorics """ Add, then apply merges (while accumulating future adds to make) until fixpoint We pass in the pending additions into propagate! so that we avoid infinite loops (otherwise we keep trying to add something that needs to be added, even though we've already queued it up to be added.) """ function prop(es::EnumState, S::Sketch, e::Int, ec::Union{Init,AddEdge,Branch}) verbose = false t = tgt(es.grph, e) J = deepcopy(es[t]) m_change, a_change = propagate!(S, J, ec.add, ec.m) es.prop[t] = (J.aux, a_change, m_change) # record the result of prop if all(is_no_op,[a_change,m_change]) && !last(crel_to_cset(S,J.model)) push!(es.models,t) # found a model end return nothing end function prop(es::EnumState, S::Sketch, e::Int, ec::MergeEdge) verbose = false t = tgt(es.grph, e) ec = es.ms[e] queued, ch = ec.queued, ec.merge J = deepcopy(es[t]) queued_ = update_change(S,J,ec.m, queued) m_change, a_change = propagate!(S, J, ch, ec.m; queued=queued_) codom(queued_.r) == codom(a_change.r) || error("HERE") es.prop[t] = (J.aux, merge(S,J, queued_, a_change), m_change) # record the result of prop if all(is_no_op,[a_change,m_change]) && !last(crel_to_cset(S,J.model)) push!(es.models,t) # found a model end return nothing end """ Run additions until there's nothing to add or merge. I.e. go as far as you can w/o branching. Initialize loop with an Addition edge that has not yet been propagated. (unknown if this will enter an infinite loop and that we have to branch) """ function add!(es::EnumState, S::Sketch, e::Int; force=false) verbose = false while true e_next = add_merge!(es, S, e) if e == e_next break end e = e_next end return e end """ Pick a FK + source element to branch on, if any. This has some loose heuristics for which morphism to choose. It favors morphisms between frozen objects over morphisms with unfrozen objects. We should probably bias cocone legs over cone legs (which get derived automatically from the data in their diagram). No heuristic is currently used to pick which element (of the ones without the FK defined) gets branched on. """ function find_branch_fk(S::Sketch, J::SketchModel)::Union{Nothing, Pair{Symbol,Int}} score(f) = sum([src(S,f)∈J.aux.frozen[1], tgt(S,f)∈J.aux.frozen[1]]) + (any(c->f ∈ last.(c.legs), S.cones) ? -0.5 : 0) fs = map(setdiff(elabel(S), J.aux.frozen[2])) do f for p in parts(J.model, src(S,f)) if isempty(incident(J.model, p, add_src(f))) return f => p end end return nothing end dangling = [score(fi[1])=>fi for fi in fs if !isnothing(fi)] return last(last(sort(dangling))) end """ Get a list of changes to branch on, corresponding to possible assignments of a FK. We should not be branching on things that have nontrivial equivalences in J.eqs. """ function branch_fk(es, S::Sketch, i::Int) aux = es.prop[i][1] J = SketchModel(es[i].model, aux) branch_m, branch_val = find_branch_fk(S, J) !isnothing(branch_m) || error("Do not yet support branching on anything but FKs") ttab = tgt(S,branch_m) for t in vcat(ttab ∉ J.aux.frozen[1] ? [0] : [], parts(J.model, ttab)) c = add_fk(S,J,branch_m,branch_val,t) J_ = deepcopy(J) m = exec_change(S, J.model, c) J_.model = codom(m) add_premodel(es, S, J_, parent=i=>Branch(c, m)) end end # """ # Pick a premodel and apply all branches, storing result back in the db. 
# Return the premodel ids that result. Return nothing if already fired. # Optionally force branching on a particular FK. # """ function chase_db_step!(S::Sketch, es::EnumState, e::Int) verbose = false change = false s, t = src(es.grph, e), tgt(es.grph, e) if isempty(incident(es.grph, t, :src)) if t ∉ es.fail ∪ es.models change |= true if isnothing(es.prop[t]) if verbose println("propagating target $t") end try prop(es,S,e, es.ms[e]) catch a_ModelException if a_ModelException isa ModelException push!(es.fail, t) if verbose println("\tMODELEXCEPTION: $(a_ModelException.msg)") end else println("ERROR AT $t") throw(a_ModelException) end end else aux, ad, mrg = es.prop[t] J = SketchModel(es[t].model, aux) if is_no_op(mrg) if is_no_op(ad) # we branch b/c no more constraints to propagate if verbose println("branching target $t") end branch_fk(es, S, t) else # we have additions to propagate if verbose println("$t has addition to propagate (nv $(nv(es.grph)))") end J_ = deepcopy(J) m = exec_change(S, J.model, ad) J_.model = codom(m) add_premodel(es,S,J_; parent=t=>AddEdge(ad,m)) end else # we have merges to propagate if verbose println("$t has merge to propagate (nv $(nv(es.grph)))") end J_ = deepcopy(J) m = exec_change(S, J.model, mrg) J_.model = codom(m) add_premodel(es,S,J_; parent=t=>MergeEdge(mrg, ad, m)) end end end end return change end """ Continually apply chase_db_step! while there is work remaining to be done. """ function chase_db(S::Sketch, es::EnumState, n=-1) verbose = true change = true while n!=0 && change n -= 1 change = false for e in edges(es.grph) change |= chase_db_step!(S,es,e) end end end """ Enumerate elements of ℕᵏ Do the first enumeration by incrementing n_nonzero and finding partitions so that ∑(c₁,...) = n_nonzero """ function combos_below(m::Int, n::Int)::Vector{Vector{Int}} res = Set{Vector{Int}}([zeros(Int,m)]) n_const = 0 # total number of constants across all sets for n_const in 1:n for n_nonzero in 1:m # values we'll assign to nodes c_parts = partitions(n_const, n_nonzero) # Which nodes we'll assign them to indices = permutations(1:m,n_nonzero) for c_partition in c_parts for index_assignment in indices v = zeros(Int, m) v[index_assignment] = vcat(c_partition...) push!(res, v) end end end end return sort(collect(res)) end """ We can reason what are the models that should come out, but not which order they are in, so we make sure canonical hashes match up. """ function test_models(db::EnumState, S::Sketch, expected; f=identity, include_one=false) Set(call_nauty(e).hsh for e in expected) == Set( call_nauty(f(get_model(db,S,m))).hsh for m in db.models if include_one || m > 1) end end # module
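# A minimal end-to-end sketch of the loop above (illustrative only; it assumes a
# `Sketch` value `S` and the relevant exports are in scope):
es = init_premodel(EnumState(), S, S.cset())   # seed the enumeration state
chase_db(S, es)                                # run until no edge has work left
for i in es.models
  println(get_model(es, S, i))                 # each complete model, as a C-Set
end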
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
25222
module Models export SketchModel, AuxData, create_premodel, crel_to_cset, cset_to_crel, validate!, Addition, Merge, Change, is_no_op, update_changes, update_change, exec_change, rem_dup_relations, has_map, fk, add_fk, ModelException # to do: cut this down to only things end-users would use using Catlab.CategoricalAlgebra, Catlab.Theories import Catlab.CategoricalAlgebra: apex, left, right using ..Sketches import Base: union!, merge using DataStructures using AutoHashEquals #----------------------------------# # Should be upstreamed into catlab # #----------------------------------# is_surjective(f::FinFunction) = length(codom(f)) == length(Set(values(collect(f)))) is_injective(f::FinFunction) = length(dom(f)) == length(Set(values(collect(f)))) function is_injective(α::ACSetTransformation{S}) where {S} for c in components(α) if !is_injective(c) return false end end return true end function is_surjective(α::ACSetTransformation{S}) where {S} for c in components(α) if !is_surjective(c) return false end end return true end image(f) = equalizer(legs(pushout(f,f))...) coimage(f) = coequalizer(legs(pullback(f,f))...) function epi_mono(f) Im, CoIm = image(f), coimage(f) iso = factorize(Im, factorize(CoIm, f)) return ComposablePair(proj(CoIm) ⋅ iso, incl(Im)) end ####### struct ModelException <: Exception msg::String ModelException(msg::String="") = new(msg) end # There is an list element for each element in the root table # Those elements each are of length n, for the n objects in the path_eq diagram # Each of those n elements is a list of the possible values that the table could # be. const EQ = Dict{Symbol, Vector{ Vector{ Union{ Nothing, AbstractVector{Int} } } } } """ Because we cannot yet compute cones incrementally, there is no reason to cache any information related to cones. eqs: Equivalence classes for each object in the model Cocones: equivalence class across all objects in each cocone diagram includes a mapping to clarify which indices correspond to which elems path_eqs: Data-structure capturing, which possible elements there are (for obj in the path eq diagram) for each element of the root object. frozen: whether a table/FK can possibly change. Initially, non-limit objects are frozen. Limit objects become frozen when all the morphisms in their diagrams are frozen. Morphisms are frozen when they are from a frozen object and fully determined. """ @auto_hash_equals mutable struct AuxData eqs::Dict{Symbol, IntDisjointSets{Int}} # cones::Vector{Dict{Vector{Int},Union{Nothing,Int}}} cocones::Vector{Pair{IntDisjointSets{Int}, Vector{Tuple{Symbol,Int,Int}}}} path_eqs::EQ frozen::Pair{Set{Symbol},Set{Symbol}} end """ Data of a premodel plus all the auxillary sketch constraint information """ @auto_hash_equals mutable struct SketchModel{S} model::StructACSet{S} aux::AuxData end """ Create an empty premodel (C-Rel). 
""" function create_premodel(S::Sketch, n=Dict{Symbol, Int}(), freeze_obs=Symbol[])::SketchModel J = S.crel() keys(n) ⊆ vlabel(S) || error("bad key(s) $(keys(n)|>collect)") # validate freeze_obs ⊆ vlabel(S) || error("bad freeze obs $(freeze_obs)") # validate # handle one_obs one_obs = Set([c.apex for c in S.cones if nv(c.d)==0]) for o in one_obs if haskey(n, o) n[o] == 1 || error("bad") else n[o] = 1 end end # handle zero obs zero_obs = Set([c.apex for c in S.cocones if nv(c.d)==0]) change = true while change # Maps into zero obs are zero obs change = false for z in zero_obs for h in hom_in(S, z) if src(S,h) ∉ zero_obs push!(zero_obs, src(S,h)); change = true end end end end for o in zero_obs if haskey(n, o) n[o] == 0 || error("bad o $o n[o] $(n[o])") else n[o] = 0 end end for (k,v) in collect(n) add_parts!(J, k, v) end lim_obs = Set([c.apex for c in vcat(S.cones,S.cocones)]) freeze_obs = Set(freeze_obs ∪ one_obs ∪ zero_obs) freeze_arrs = Set{Symbol}([hom_out(S,collect(zero_obs))...,add_id.(vlabel(S))...]) eqs = Dict([o=>IntDisjointSets(nparts(J, o)) for o in vlabel(S)]) cocones = Vector{Pair{IntDisjointSets{Int}, Vector{Tuple{Symbol,Int,Int}}}}( map(S.cocones) do c tabs = vcat(map(enumerate(vlabel(c.d))) do (iv,v) Tuple{Symbol,Int,Int}[(v,iv,i) for i in parts(J,v)] end...) ids = IntDisjointSets(length(tabs)) ldict = [l=>[ti for (ti, l_) in c.legs if l_==l] for l in unique(last.(c.legs))] for (l,ltabs) in filter(x->length(x[2])>1, ldict) ref_inds = findall(x->x[2]==first(ltabs), tabs) for ltab in ltabs[2:end] for (i,j) in zip(ref_inds, findall(x->x[2]==ltab, tabs)) union!(ids, i, j) end end end return ids => tabs end) path_eqs = EQ(map(collect(S.eqs)) do (k,g) k=>map(parts(J,k)) do p map(enumerate(vlabel(g))) do (i,v) if i == 1 return [p] elseif v ∈ freeze_obs return parts(J,v) else return nothing end end end end) return SketchModel(J,AuxData(eqs,cocones,path_eqs, freeze_obs=>freeze_arrs)) end """ A premodel that does not have correct cone/cocone/patheq data. Mainly for testing. """ function test_premodel(S::Sketch, J::StructACSet{Sc}; freeze=Symbol[]) where Sc for c in filter(c->nv(c.d) == 0, S.cones) if nparts(J, c.apex) == 0 add_part!(J, c.apex) end end J_ = create_premodel(S, Dict(k=>nparts(J,k) for k in vlabel(S)), freeze) Jrel = cset_to_crel(S,J) ad = Addition(S,J_,homomorphism(J_.model,Jrel;monic=true),id(J_.model)) J_.model = codom(exec_change(S,J_.model,ad)) # TODO fix cocones/patheqs to first appx? return J_ end """ Convert a premodel (C-Rel) to a model C-Set. Elements that are not mapped by a relation are given a target value of 0. If this happens at all, an output bool will be true If the same element is mapped to multiple outputs, an error is thrown. """ crel_to_cset(S::Sketch, J::SketchModel) = crel_to_cset(S,J.model) function crel_to_cset(S::Sketch, J::StructACSet)::Pair{StructACSet, Bool} res = S.cset() for o in S.schema[:vlabel] add_parts!(res, o, nparts(J, o)) end partial = false for m in elabel(S) msrc, mtgt = add_srctgt(m) length(J[msrc]) == length(Set(J[msrc])) || error("nonfunctional $J") partial |= length(J[msrc]) != nparts(J, src(S, m)) for (domval, codomval) in zip(J[msrc], J[mtgt]) set_subpart!(res, domval, m, codomval) end end return res => partial end function cset_to_crel(S::Sketch, J::StructACSet{Sc}) where Sc res = S.crel() for o in ob(Sc) add_parts!(res, o, nparts(J,o)) end for h in hom(Sc) for (i, v) in enumerate(J[h]) if v != 0 d = zip(add_srctgt(h),[i,v]) add_part!(res, h; Dict(d)...) 
end end end res end """ TODO: There are certain things we wish premodels to abide by, regardless of state of information propagation: - Equivalence class morphisms are surjective - The leg data in the (co)cone object is correct. (i.e. if the cone element says leg#1 is value x, then the foreign key (corresponding to leg#1) of corresponding apex element should be x. - There is a bijection between elements in the apex of a (co)cone and the corresponding (co)cone object """ function validate!(S::Sketch, J_::SketchModel) J = J_.model for (c,Jc) in zip(S.cones, J_.cones) nparts(J, c.apex) == nparts(J, :apex) || error("Cone ob not in bijection") # todo end for (c,Jc) in zip(S.cocones, J_.cocones) nparts(J, c.apex) == nparts(Jc, :apex) || error("Cocone ob not in bijection") # todo end end # Changes ######### abstract type Change{S} end apex(c::Change{S}) where S = dom(c.l) # == dom(c.r) left(c::Change{S}) where S = c.l right(c::Change{S}) where S = c.r """ Add elements (but merge none) via a monic partial morphism L↩I↪R, where R is the current model. """ struct Addition{S} <: Change{S} l :: ACSetTransformation{S} r :: ACSetTransformation{S} function Addition(S::Sketch, J::SketchModel, l::ACSetTransformation{Sc}, r::ACSetTransformation{Sc}) where Sc dom(l)==dom(r) || error("addition must be a span") codom(r) == J.model || error("addition doesn't match") map(collect(union(J.aux.frozen...) ∩ (vlabel(S)∪elabel(S)))) do s nd, ncd = nparts(dom(l), s), nparts(codom(l),s) nd <= ncd || error("cannot add $s (frozen): $nd -> $ncd") end is_injective(l) || error("span L must be monic $(components(l))") is_injective(r) || error("span R must be monic $(components(r))") all(is_injective, [l,r]) || error("span must be monic") all(is_natural, [l, r]) || error("naturality") all(e->nparts(dom(l), e) == 0, elabel(S)) || error("No FKs in interface") new{Sc}(deepcopy(l),deepcopy(r)) end end """Easier constructor, when the addition has zero overlap with the old model""" Addition(S, old::SketchModel{Sc}, new::StructACSet{Sc}) where Sc = Addition(S, old, create(new), create(old.model)) Addition(S, old::SketchModel) = Addition(S,old,S.crel()) function Base.show(io::IO, a::Addition{S}) where S body = join(filter(x->!isempty(x), map(ob(S)) do v n = nparts(codom(a.l), v) - nparts(dom(a.l), v) n <= 0 ? "" : "$v:$n" end), ",") print(io, "Addition($body)") end """ We can merge elements (but add none) via a span L↞I↪R, where R is the current model. L contains the merged equivalence classes, and I contains the elements of R that are being merged together. NOTE: we immediately modify the IntDisjointSets to quotient the equivalence classes, allowing the Merge information to be used immediately in inferring (co)cones/patheqs/etc. However, we don't immediately perform the merge. We want to know which two distinct things got merged in the later procedure of inferring how (co)cones *change* from the merge. """ struct Merge{S} <: Change{S} l::ACSetTransformation{S} r::ACSetTransformation{S} function Merge(S::Sketch, J::SketchModel{Sc}, d::Dict{Symbol,Vector{Vector{Int}}}) where Sc I, R = [S.crel() for _ in 1:2] dIR, dIJ = [DefaultDict{Symbol, Vector{Int}}(()->Int[]) for _ in 1:2] keys(d) ⊆ vlabel(S) || error(keys(d)) for (k, vvs) in collect(d) allvs = vcat(vvs...) 
length(allvs) == length(Set(allvs)) || error("Merge not disjoint $k $vvs") minimum(length.(vvs)) > 1 || error("Merging single elem $k $vvs") add_parts!(I, k, length(allvs)) for (r, vs) in enumerate(vvs) append!(dIJ[k], vs) append!(dIR[k], fill(add_part!(R, k), length(vs))) # Quotient the eq classes immediately for vs in filter(x->length(x)>1, vvs) for (v1, v2) in zip(vs, vs[2:end]) union!(J.aux.eqs[k], v1, v2) end end end end ir = ACSetTransformation(I, R; dIR...) ij = ACSetTransformation(I, J.model; dIJ...) for v in vlabel(S) if nparts(I,v) == 1 error(I) end end map(collect(union(J.aux.frozen...)∩(vlabel(S)∪elabel(S)))) do s nd, ncd = nparts(dom(ir), s), nparts(codom(ir),s) nd == ncd || error("cannot merge/add $s (frozen): $nd -> $ncd") end is_surjective(ir) || error("ir $ir") is_injective(ij) || error("ij $ij") all(is_natural, [ir, ij]) || error("naturality") all(e->nparts(I, e) == 0, elabel(S)) || error("No FKs in interface") return new{Sc}(ir, ij) end function Merge(S::Sketch,_::SketchModel,l::ACSetTransformation{Sc},r::ACSetTransformation{Sc}) where Sc dom(l) == dom(r) is_surjective(l) || error("L $l") is_injective(r) || error("R $r") all(is_natural, [l, r]) || error("naturality") all(e->nparts(dom(l), e) == 0, elabel(S)) || error("No FKs in interface") new{Sc}(l,r) end end Merge(S, old::SketchModel) = Merge(S,old,Dict{Symbol,Vector{Vector{Int}}}()) function Base.show(io::IO, a::Merge{S}) where S body = join(filter(x->!isempty(x), map(ob(S)) do v n = [length(preimage(a.l[v], x)) for x in parts(codom(a.l), v)] isempty(n) ? "" : "$v:$(join(n,"|"))" end), ",") print(io, "Merge($body)") end """ Apply a change to CSet. This does *not* update the eqs/(co)cones/patheqs. Just returns a model morphism from applying the change. """ function exec_change(S::Sketch, J::StructACSet{Sc},e::Change )::ACSetTransformation where {Sc} codom(e.r) == J || error("Cannot apply change. No match.") is_natural(e.r) || error(println.(pairs(components(e.r)))) dom(e.l) == dom(e.r) || error("baddom") res = pushout(e.l, e.r) |> collect |> last return res ⋅ rem_dup_relations(S, codom(res)) end function rem_dup_relations(S::Sketch, J::StructACSet) # Detect redundant duplicate relation rows md = Dict{Symbol, Vector{Vector{Int}}}() J2 = typeof(J)() dJJ = Dict{Symbol, Vector{Int}}(pairs(copy_parts!(J2, J; Dict([v=>parts(J,v) for v in vlabel(S)])...))) changed = false for d in elabel(S) # could be done in parallel dJJ[d] = Int[] dsrc, dtgt = add_srctgt(d) dic = Dict() for (i, st) in enumerate(zip(J[dsrc], J[dtgt])) if haskey(dic, st) changed |= true else dic[st] = add_part!(J2, d; Dict(zip([dsrc,dtgt], st))...) end push!(dJJ[d], dic[st]) end md[d] = filter(v->length(v) > 1, collect(values(dic))) |> collect end if !changed return id(J) end return ACSetTransformation(J, J2; dJJ...) end """ It seems like we could just postcompose the right leg of the span with the model update (R₁→R₂), like so: L ↩ I ↪ R₁ ⟶ R₂ However, this leaves us with a span L ↩ I ⟶ R₂, where the right leg is not monic. We want to replace this with an equivalent span that is monic by merging together elements in L that have been implicitly merged by the model update. We first get the *image* of I in R₂, I', which is an epi-mono decomposition. We then take a pushout to obtain our new monic span. L ↩ I ↡ ⌝ ↡ ↘ L' ↩ I' ↪ R (<-- this is the new, monic span) This all applies equally to a span where the left leg is epi, not mono. 
""" function update_change(S::Sketch, ex::ACSetTransformation, l, r_) all(is_natural, [l,r_,ex]) || error("naturality") # The equivalence class data may have changed in the model due to on-the-fly # merging, but we can recover this by keeping the r = homomorphism(dom(r_), dom(ex); initial=Dict( k=>collect(components(r_)[k]) for k in labels(S))) R = r ⋅ ex I_I, I_R = epi_mono(R) _, I_L = pushout(l, I_I) return I_L, I_R end update_change(S::Sketch, J::SketchModel, ex, a::Addition) = Addition(S, J, update_change(S, ex, a.l, a.r)...) update_change(S::Sketch, J::SketchModel, ex, a::Merge) = Merge(S, J, ex, update_change(S, ex, a.l, a.r)...) update_changes(S::Sketch, J, ex, cs) = map(cs) do c res = update_change(S,J, ex, c) codom(res.r) == J.model || error("failed updated $c \n$ex") return res end eq_class(eq::IntDisjointSets, v::Int) = [i for i in 1:length(eq) if in_same_set(eq, i,v)] """ Check if there exists a map between x and y induced by equivalence classes, i.e. by checking if there is a relation [X]<-X<-f->Y->[Y] """ function has_map(S::Sketch, J_::SketchModel, f::Symbol, x::Int, y::Int)::Bool J = J_.model from_map, to_map = add_srctgt(f) xs, ys = eq_class(J_.eqs[src(S,f)], x), eq_class(J_.eqs[tgt(S,f)], y) !isempty(vcat(incident(J,xs,from_map)...) ∩ vcat(incident(J,ys,to_map)...)) end """ Get something that `x` is related to by `f`, if anything """ function fk(S::Sketch, J::SketchModel, f::Symbol, x::Int) from_map, to_map = add_srctgt(f) xs = eq_class(J.aux.eqs[src(S,f)], x) fs = vcat(incident(J.model,xs,from_map)...) if isempty(fs) return nothing end return find_root!(J.aux.eqs[tgt(S,f)], J.model[first(fs), to_map]) end """ Get f(x) in a premodel (return an arbitrary element that is related by f). Return nothing if f(x) is not yet defined. """ function fk(S::Sketch, J::StructACSet, f::Symbol, x::Int; inv=false) from_map, to_map = add_srctgt(f) for v in filter(v->f==add_id(v), vlabel(S)) return x end if inv to_map,from_map = from_map, to_map end fs = incident(J,x,from_map) if isempty(fs) return nothing end return J[first(fs), to_map] end """Check if a morphism in a premodel is total, modulo equivalence classes""" is_total(S::Sketch, J::SketchModel, e::Symbol) = is_total(S,J.model,J.aux.eqs,e) function is_total(S::Sketch, J::StructACSet, eqs::Dict{Symbol, IntDisjointSets{Int}}, e::Symbol)::Bool e_src = add_src(e) sreps = unique([find_root!(eqs[src(S,e)],x) for x in J[e_src]]) return length(sreps) == num_groups(eqs[src(S,e)]) end fk_in(S::Sketch, J::SketchModel, f::Symbol, y::Int) = fk_in(S,J,f,[y]) function fk_in(S::Sketch, J::SketchModel, f::Symbol, ys::AbstractVector{Int}) if isempty(ys) return [] end from_map, to_map = add_srctgt(f) ys = union([eq_class(J.aux.eqs[tgt(S,f)], y) for y in ys]...) fs = vcat(incident(J.model,ys,to_map)...) xs = [find_root!(J.aux.eqs[src(S,f)], x) for x in J.model[fs, from_map]] return xs |> unique end """ If y is 0, this signals to add a *fresh* element to the codomain. """ function add_fk(S::Sketch,J::SketchModel,f::Symbol,x::Int,y::Int) verbose = false if verbose println("adding fk $f:#$x->#$y") end st = y==0 ? [src(S,f)] : [src(S,f),tgt(S,f)] st_same, xy_same = (src(S,f)==tgt(S,f)), (x == y) I = S.crel(); if st_same&&xy_same add_part!(I, st[1]) is_it = [1,1] else is_it = [add_part!(I, x) for x in st]; end L = deepcopy(I) if y == 0 is_it = [1,add_part!(L, tgt(S,f))] end add_part!(L, f; Dict(zip(add_srctgt(f), is_it))...) IL = homomorphism(I,L; initial=Dict(o=>parts(I,o) for o in vlabel(S))); d = st_same ? st[1]=> (xy_same ? 
[x] : [x,y]) : zip(st,[[x],[y]]) IR = ACSetTransformation(I, J.model; Dict(d)...) Addition(S,J,IL,IR) end """ Merge two Additions (or possibly Merges, but this hasn't been tested) which may be partially overlapping in their maps into the model. Let Iₒ be the overlap between the two I's, i.e. the pullback. We use this to form the new I and R via pushouts. The map from the new I to the new L is given by the universal property (as the maps to L are a "bigger" pushout square). The map from new I to original R is also given by the same universal property, where we form a commutative square using the original maps Iₙ->R. r₁ I₁↪R Iₒ ↪ I₁ ↪ L₁ Iₒ ↪ I₁ --- ↑⌝ ↑ r₂ ↓ ⌜ ↓ | ↓ ⌜ ↓ | Iₒ↪I₂ I₂ ↪ newI | I₂ ↪ newI | r₁ ↓ !↘⌜ v | !↘⌜ v L₂ -----> newL -------> R r₂ This doesn't generalize to multipushouts/multipullbacks as easily as one would hope. If you have 3 Additions that have only pairwise overlap, Iₒ will be empty. """ merge(S::Sketch, J::SketchModel{X}, xs::AbstractVector{T}) where {X,T} = isempty(xs) ? T(S,J) : reduce((x,y)->merge(S,J,x,y), xs) function merge(S::Sketch, J::SketchModel, a1::Change{Sc},a2::Change{Sc}) where Sc as = [a1,a2] T = a1 isa Addition ? Addition : Merge ls, rs = left.(as), right.(as) Io = pullback(rs) # fail if a1 and a2 point to different models newI = pushout(legs(Io)) ll = [compose(a,b) for (a,b) in zip(legs(Io), ls)] newL = pushout(ll) il = [compose(a,b) for (a,b) in zip(ls,legs(newL))] newIL = universal(newI, Multicospan(il)) newIR = universal(newI, Multicospan(rs)) return T(S,J,newIL,newIR) end # """ # Get the equivalence classes out of an equivalence relation. Pick the lowest # value as the canonical representative. # """ function eq_sets(eq::IntDisjointSets; remove_singles::Bool=false)::Set{Set{Int}} eqsets = DefaultDict{Int,Set{Int}}(Set{Int}) for i in 1:length(eq) push!(eqsets[find_root!(eq, i)], i) end filt = v -> !(remove_singles && length(v)==1) return Set(filter(filt, collect(values(eqsets)))) end """ Applying some changes makes other changes redundant. 
This detects when we can ignore a change """ is_no_op(ch::Change) = all(f->dom(f)==codom(f) && isperm(collect(f)), collect(components(ch.l))) function merge_eq(S::Sketch, J::StructACSet, eqclasses::Dict{Symbol, IntDisjointSets{Int}} ) function eq_dicts(eq::Dict{Symbol, IntDisjointSets{Int}})::Dict{Symbol, Dict{Int,Int}} res = Dict{Symbol, Dict{Int,Int}}() for (k, v) in pairs(eq) d = Dict{Int, Int}() for es in eq_sets(v) m = minimum(es) for e in es d[e] = m end end res[k] = d end return res end verbose = false J = deepcopy(J) # Initialize a function mapping values to their new (quotiented) value μ = eq_dicts(eqclasses) # Initialize a record of which values are to be deleted delob = DefaultDict{Symbol, Vector{Int}}(Vector{Int}) # Populate `delob` from `eqclasses` for (o, eq) in pairs(eqclasses) eqsets = eq_sets(eq; remove_singles=true) # Minimum element is the representative for vs in map(collect,collect(values(eqsets))) m = minimum(vs) vs_ = [v for v in vs if v!=m] append!(delob[o], collect(vs_)) end end # Replace all instances of a class with its representative in J # could be done in parallel for d in elabel(S) dsrc, dtgt = add_srctgt(d) μsrc, μtgt = μ[src(S, d)], μ[tgt(S, d)] isempty(μsrc) || set_subpart!(J, dsrc, replace(J[dsrc], μsrc...)) isempty(μtgt) || set_subpart!(J, dtgt, replace(J[dtgt], μtgt...)) end # Detect redundant duplicate relation rows for d in elabel(S) # could be done in parallel dsrc, dtgt = add_srctgt(d) seen = Set{Tuple{Int,Int}}() for (i, st) in enumerate(zip(J[dsrc], J[dtgt])) if st ∈ seen push!(delob[d], i) else push!(seen, st) end end end # Remove redundant duplicate relation rows for (o, vs) in collect(delob) isempty(vs) || rem_parts!(J, o, sort(vs)) end return J #μ end frozen_hom(S,J,h) = h ∈ J.aux.frozen[2] || any(v->h==add_id(v), vlabel(S)) # """Imperative approach to this.""" # function exec_change!(S::Sketch, J::StructACSet, # m::Dict{Symbol, Vector{Vector{Int}}}) # # values to be deleted # delob = DefaultDict{Symbol, Vector{Int}}(Vector{Int}) # for (k, vvs) in collect(m) # eqk, eqk_hom = add_equiv(k, true) # i = IntDisjointSets(nparts(J,eqk)) # for vs in filter(x->length(x)>1, vvs) # for (v1, v2) in zip(vs, vs[2:end]) # union!(i, J[v1, eqk_hom], J[v2, eqk_hom]) # end # end # # Populate `delob` from `eqclasses` # eqsets = eq_sets(i; remove_singles=true) # for vs_ in sort.(collect.(collect(values(eqsets)))) # v, vs, n = vs_[1], vs_[2:end], length(vs_)-1 # append!(delob[k], vs) # Minimum element is the rep # # delete equivalence class members that are not equal to the rep's eq.c. # del_eqcs = sort(filter(e->e!=J[v, eqk_hom], J[vs, eqk_hom])|>collect) # append!(delob[eqk], del_eqcs) # for e in vcat(add_tgt.(hom_in(S, k)), add_src.(hom_out(S, k))) # set_subpart!(J, vcat(incident(J, vs, e)...), e, fill(v, n)) # end # end # end # for (k, vs) in collect(delob) # rem_parts!(J, k, vs) # end # end """ Relation tables need not have duplicate entries with the same src/tgt. It is best to run this right after quotienting the equivalence classes. 
""" # function rem_dup_relations!(S::Sketch, J::StructACSet) # delob = DefaultDict{Symbol, Vector{Int}}(Vector{Int}) # # Detect redundant duplicate relation rows # for d in elabel(S) # could be done in parallelShe # dsrc, dtgt = add_srctgt(d) # seen = Set{Tuple{Int,Int}}() # for (i, st) in enumerate(zip(J[dsrc], J[dtgt])) # if st ∈ seen # push!(delob[d], i) # else # push!(seen, st) # end # end # end # # Remove redundant duplicate relation rows # for (o, vs) in collect(delob) # isempty(vs) || rem_parts!(J, o, sort(vs)) # end # end # union!(S::Sketch, J::StructACSet, tab::Symbol, i::Int, j::Int) = # union!(S,J,tab,[i,j]) # """ # Merge multiple elements of an *equivalence class* table. # """ # function union!(::Sketch, J::StructACSet, tab::Symbol, xs::Vector{Int}) # if length(xs) < 2 return false end # m = minimum(xs) # union_directed!(J, tab, m, [x for x in xs if x != m]) # return true # end # """ # Merge eqclass elements `i < xs` # Send everything that pointed to `xs` now to `i`. # """ # function union_directed!(J::StructACSet, tab::Symbol, i::Int, xs::Vector{Int}) # eq_tab, eq_hom = add_equiv(tab, true) # inc = vcat(incident(J, xs, eq_hom)...) # set_subpart!(J, inc, eq_hom, i) # rem_parts!(J, eq_tab, sort(xs)) # end # add_rel!(S::Sketch, J::StructACSet, f::Symbol, i::Int, j::Int) = # add_part!(J, f; Dict(zip(add_srctgt(f), [i,j]))...) end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
7287
using Catlab.Graphs # Equations ########### """ Path Eq cached state involves lots of subsets, each of which can be permuted, merged, or added to. """ function update_patheqs!(S::Sketch, J::SketchModel,f::CSetTransformation) μ = Dict(v=>[find_root!(J.aux.eqs[v],f[v](i)) for i in parts(dom(f), v)] for v in vlabel(S)) verbose = false ntriv(v) = nv(S.eqs[v]) > 1 if verbose println("updating path eqs w/ frozen obs $(J.aux.frozen[1]) \nold path eqs", J.aux.path_eqs[:I]) end new_peqs = EQ(map(vlabel(S)) do v if verbose && ntriv(v) println("\tv $v") end return v => map(parts(J.model, v)) do p preim = preimage(f[v], p) if verbose && ntriv(v) println("\t\tp $p preim $preim") end if length(preim) == 0 # we have a new element res = Union{Nothing,Vector{Int}}[vv ∈ J.aux.frozen[1] ? sort(collect(eq_reps(J.aux.eqs[vv]))) : nothing for vv in vlabel(S.eqs[v])] res[1] = [p] return res elseif length(preim) == 1 poss = J.aux.path_eqs[v][only(preim)] if verbose && ntriv(v) println("\t\tpreim=1 w/ corresponding poss $poss") end return map(zip(vlabel(S.eqs[v]), poss)) do (tab, tabposs) if isnothing(tabposs) if tab ∉ J.aux.frozen[1] return nothing else return parts(J.model, tab) |> collect end end new_elems = filter(x->isempty(preimage(f[tab],x)), parts(J.model,tab)) if verbose && ntriv(v) println("\t\t\tconsidering $tab w/ μ($tabposs)+$new_elems= $(unique(μ[tab][tabposs]) ∪ new_elems)") end return unique(μ[tab][tabposs]) ∪ new_elems end else # we've merged elements pos_res = map(preim) do pre poss = J.aux.path_eqs[v][pre] return map(zip(vlabel(S.eqs[v]), poss)) do (tab, tabposs) μ[tab][tabposs] end end res = [sort(unique(union(x...))) for x in zip(pos_res...)] return res end end end) J.aux.path_eqs = new_peqs end """ Use set of path equalities starting from the same vertex to possibly resolve some foreign key values. """ propagate_patheqs!(S::Sketch, J::SketchModel,f::CSetTransformation, c::Change) = vcat(Vector{Change}[propagate_patheq!(S, J, f, c, v) for v in vlabel(S)]...) """ If we add an element, this can add possibilities. If we add a relation, this can constrain the possible values. """ function propagate_patheq!(S::Sketch, J::SketchModel,f::CSetTransformation, c::Change, v::Symbol)::Vector{Change} if ne(S.eqs[v]) == 0 return Change[] end verbose = false res = Change[] to_check = Set{Symbol}() # ADDING OBJECTS for av in unique(vlabel(S.eqs[v])) if any(p->length(preimage(f[av], p)) != 1, parts(J.model, av)) push!(to_check, v) end end # Adding edges for (e, srctab, tgttab) in Set(elabel(S.eqs[v], true)) if nparts(codom(c.l), e) > 0 union!(to_check, [srctab, tgttab]) end end if verbose && !isempty(to_check) println("tables to check for updates: $v: $to_check") end return propagate_patheq!(S, J,f, v, to_check) end """ If f is now fully defined on *all* possibilities in the domain, then we can restrict the possibilities of the codomain to the image under f. If f is frozen, then we can pullback possibilities from a codomain and restrict possibilities in the domain to the preimage. If we discover, for any starting vertex, that there is an edge that has a singleton domain and codomain, then we can add that FK via an Addition. """ function propagate_patheq!(S::Sketch, J::SketchModel, m, v::Symbol, tabs::Set{Symbol}) res = Change[] G = S.eqs[v] fo, fh = J.aux.frozen verbose = 0 * (nv(S.eqs[v]) > 1 ? 1 : 0) if verbose > 1 println("prop patheq of $v (initial changed tabs: $tabs) w/ $(J.aux.path_eqs[v])") end while !isempty(tabs) tab = pop!(tabs) hs = union(Set.([hom_in(S, tab), hom_out(S, tab)])...) 
Gfks = [:elabel, :src, :tgt,[:src,:vlabel],[:tgt,:vlabel]] for tab_ind in findall(==(tab), vlabel(S.eqs[v])) if verbose > 1 println("changed tab $tab with tab ind $tab_ind") end # check all edges incident to the changed table for f_i in filter(e->G[e,:elabel] ∈ hs, edges(G)) f, s, t, Stab, Ttab = [G[f_i,x] for x in Gfks] f_s, f_t = add_srctgt(f) Seq,Teq = [J.aux.eqs[x] for x in [Stab,Ttab]] if verbose > 1 println("\tcheck out edge #$f_i ($f:$Stab#$s->$Ttab#$t)") end # Things we can infer if map has been completely determined already # and if obs are frozen if is_total(S,J,f) && [Stab,Ttab] ⊆ fo im_eqs = Set([find_root!(Teq, u) for u in unique(J.model[f_t])]) # every element in the image of f im = [p for p in parts(J.model, Ttab) if find_root!(Teq, p) ∈ im_eqs] # restrict possibilities of codom to image of f for poss in J.aux.path_eqs[v] if !isnothing(poss[t]) && poss[t] ⊈ im push!(tabs, Ttab) if verbose > 1 println("\treducing codom to $(poss[t])∩$im") end intersect!(poss[t], im) if isempty(poss[t]) throw(ModelException("Path eq imposs")) end end end # restrict possibilities of dom to preimage of possibilities for poss in filter(poss->!any(isnothing, poss[[t,s]]), J.aux.path_eqs[v]) preim_eqs = Set([find_root!(Seq,u) for u in J.model[vcat(incident(J.model, poss[t], f_t)...) , f_s]]) preim = [p for p in parts(J.model, Stab) if find_root!(Seq, p)∈preim_eqs] if poss[s] ⊈ preim if verbose > 1 println("restricting dom to $(poss[s])∩$preim") end push!(tabs, Stab) intersect!(poss[s], preim) if isempty(poss[s]) throw(ModelExcecption()) end end end end # Things that we can infer even if the map is not yet total # or if objects are not frozen. for (i, poss) in enumerate(J.aux.path_eqs[v]) if verbose > 1 println("\t\tconsidering poss from $tab#$i: $poss") end # we can set the fk for f of a certain element if !isnothing(poss[s]) && length(poss[s]) == 1 out = fk(S,J,f,only(poss[s])) # whether the fk is already set # fk is not set and there is only one possibility if isnothing(out) && !isnothing(poss[t]) && length(poss[t]) == 1 if verbose > 0 println("\t\t***ADDING checking $v#$tab_ind. $f:$(only(poss[s]))->$(only(poss[t]))***") end push!(res, add_fk(S,J,f,only(poss[s]),only(poss[t]))) # fk is set: we can reduce the possibilities of codom to one elseif !isnothing(out) && poss[t] != [out] # we can reduce the tgt to one thing if verbose > 1 println("\t\t\twe can infer $f for root $(only(poss[1])) from $(only(poss[s])) to $Ttab ($(poss[t]))") end if isnothing(poss[t]) || any(pt->in_same_set(J.aux.eqs[Ttab], out, pt), poss[t]) poss[t] = [out]; else throw(ModelException("Path eq impossibility")) end push!(tabs, Ttab) end end end end end end res end
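# For reference (illustrative only): the path-equation data consumed above is the
# per-object diagram dictionary S.eqs, which the Sketch constructor builds from
# pairs of equal paths. A commutative triangle f⋅g = h would be declared as below
# (the names :f, :g, :h and the `schema` graph are hypothetical):
S = Sketch(:triangle, schema; eqs=[[[:f, :g], [:h]]])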
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
4454
module Propagate export propagate! using ..Sketches, ..Models using Catlab.CategoricalAlgebra, Catlab.Theories using DataStructures using ..Models: EQ, fk_in, is_total, is_injective, eq_sets, merge_eq, is_surjective using ..Sketches: project include(joinpath(@__DIR__, "Functionality.jl")) include(joinpath(@__DIR__, "Cones.jl")) include(joinpath(@__DIR__, "Cocones.jl")) include(joinpath(@__DIR__, "PathEqs.jl")) """ Code to take a premodel and propagate information using (co)-limits and path equations, where "propagating" info means one of: - Adding a foreign key relation - Quotienting an equivalence class - Updating (co)cone data - Reducing possibilities for foreign key relations via path equalities The general interface of a propagator then is to take a change and return a list of changes that are implied. We decouple producing these changes from executing them for efficiency. """ function eq_reps(eq::IntDisjointSets)::Set{Int} Set([find_root!(eq, i) for i in 1:length(eq)]) end """ When we take a change and apply it, we need to update the (co)cone data and path equation possibilities in addition to checking for new merges/additions that need to be made. We may be propagating a change while there is already an addition queued. """ function propagate!(S::Sketch, J::SketchModel{Sc}, c::Change{Sc}, m::ACSetTransformation; queued=nothing) where Sc if isnothing(queued) queued = Addition(S,J) end verbose = false update_eqs!(J,m) if verbose println("\t\told frozen $(J.aux.frozen)") end update_frozen!(S,J,m,c,queued) if verbose println("\t\tnew frozen $(J.aux.frozen)") end update_patheqs!(S, J, m) update_cocones!(S, J, m, c) # update (co)cones patheqs and quotient by functionality updates = vcat( set_terminal(S,J), quotient_functions!(S,J,m,c), propagate_cones!(S,J,m,c), propagate_cocones!(S,J,m,c), propagate_patheqs!(S,J,m,c)) m_update = merge(S,J,Merge[u for u in updates if u isa Merge]) a_update = merge(S,J,Addition[u for u in updates if u isa Addition]) (m_update, a_update) end """ All maps into frozen objects of cardinality 1 are determined """ function set_terminal(S::Sketch,J::SketchModel) verbose = false res = Addition[] for v in J.aux.frozen[1] if nparts(J.model, v) == 1 # all maps into this obj must be 1 for e in hom_in(S,v) e_s, e_t = add_srctgt(e) for u in setdiff(parts(J.model, src(S,e)), J.model[e_s]) if verbose println("SET TERMINAL ADDING $e:#$u->1") end push!(res, add_fk(S,J,e,u,1)) end end end end return res end """ Modify union find structures given a model update """ function update_eqs!(J::SketchModel,m::ACSetTransformation) J.aux.eqs = Dict(map(collect(J.aux.eqs)) do (v, eq) new_eq = IntDisjointSets(nparts(J.model, v)) for eqset in collect.(eq_sets(eq; remove_singles=true)) eq1, eqrest... = eqset [union!(new_eq, m[v](eq1), m[v](i)) for i in eqrest] end return v=>new_eq end) end """ Update homs when their src/tgt are frozen and are fully identified. Update (co)limit objects when their diagrams are frozen. TODO: do this incrementally based on change data TODO this assumes that each object is the apex of at most one (co)cone. 
""" function update_frozen!(S::Sketch,J::SketchModel,m, ch::Change, queued::Addition) fobs, fhoms = J.aux.frozen chng = false is_iso(x) = is_injective(ch.l[x]) && is_surjective(ch.l[x]) is_isoq(x) = is_injective(queued.l[x]) && is_surjective(queued.l[x]) for e in elabel(S) if src(S,e) ∈ fobs && is_total(S,J,e) && e ∉ fhoms && is_isoq(e) push!(fhoms,e); chng |= true end end for c in S.cones if c.apex ∉ fobs && all(v->v∈fobs, vlabel(c.d)) && all(e->e∈fhoms, elabel(c.d)) && all( l->is_total(S,J,l), unique(last.(c.legs))) && is_iso(c.apex) && all(is_iso, vcat(vlabel(c), elabel(c))) if all(is_iso, [c.apex,last.(c.legs)...]) push!(fobs, c.apex); chng |= true end end end for (c,(cdata,cdict)) in zip(S.cocones,J.aux.cocones) if c.apex ∉ fobs && all(v->v∈fobs, vlabel(c.d)) && all(e->e∈fhoms, elabel(c.d)) && all( l->is_total(S,J,l), unique(last.(c.legs))) && is_iso(c.apex) push!(fobs, c.apex); chng |= true # do we need to check that cdict isn't missing something? end end J.aux.frozen = fobs => fhoms if chng update_frozen!(S,J,m,ch, queued) end end end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
21615
module Sketches export Sketch, S0, LabeledGraph, Cone, hom_set, hom_in, hom_out, sketch_from_json, to_json, add_src, add_tgt, add_srctgt, dual, elabel, relsize, sizes, cone_leg,cone_legs, cocone_legs, cocone_leg, add_cone,add_cocone, add_path, labels, vlabel, add_pathrel, add_id, show_lg using Catlab.Present, Catlab.Graphs, Catlab.Theories, Catlab.CategoricalAlgebra using Catlab.Programs, Catlab.Graphics using Catlab.CategoricalAlgebra.CSetDataStructures: struct_acset import Catlab.Theories: dual import Catlab.Graphs: src, tgt, topological_sort, inneighbors, outneighbors import Catlab.CategoricalAlgebra: legs using CSetAutomorphisms using AutoHashEquals using JSON using DataStructures # DefaultDict, IntDisjointSets # Reflexive graphs ################## inneighbors(g::T, v::Int) where {T<:AbstractReflexiveGraph} = subpart(g, incident(g, v, :tgt), :src) outneighbors(g::T, v::Int) where {T<:AbstractReflexiveGraph} = subpart(g, incident(g, v, :src), :tgt) """A finitely presented category (with designated id edges)""" @present TheoryLabeledGraph <: SchReflexiveGraph begin Label::AttrType vlabel::Attr(V,Label) elabel::Attr(E,Label) end; @acset_type LabeledGraph_( TheoryLabeledGraph, index=[:src,:tgt,:vlabel,:elabel] ) <: AbstractReflexiveGraph const LabeledGraph = LabeledGraph_{Symbol} show_lg(x::LabeledGraph) = to_graphviz(x; node_labels=:vlabel, edge_labels=:elabel) add_id(x::Symbol) = Symbol("id_$x") function add_id!(G::LabeledGraph) for v in vertices(G) if G[v, :refl] == 0 e = add_edge!(G, v, v; elabel=add_id(G[v,:vlabel])) set_subpart!(G, v, :refl, e) end end return G end src(G::LabeledGraph, f::Symbol) = G[only(incident(G, f, :elabel)), :src] tgt(G::LabeledGraph, f::Symbol) = G[only(incident(G, f, :elabel)), :tgt] vlabel(G::LabeledGraph) = G[:vlabel] elabel(G::LabeledGraph) = G[non_id(G), :elabel] labels(G::LabeledGraph) = vcat(G[:vlabel], elabel(G)) add_src(x::Symbol) = Symbol("src_$(x)") add_tgt(x::Symbol) = Symbol("tgt_$(x)") add_srctgt(x::Symbol) = add_src(x) => add_tgt(x) add_cone(x::Int) = Symbol("Cone_$x") add_cocone(x::Int) = Symbol("Cocone_$x") add_path(n::Symbol, x::Symbol) = Symbol("$(n)_Path_$x") add_path(i::Int) = Symbol("P$i") add_pathrel(i::Int) = (Symbol("R_$i"), add_srctgt(Symbol("R_$i"))...) elabel(G::LabeledGraph, st::Bool) = collect(zip([G[non_id(G), x] for x in [:elabel, [:src,:vlabel], [:tgt,:vlabel]]]...)) non_id(G::LabeledGraph) = setdiff(edges(G), G[:refl]) |> collect |> sort # Cones #################################### """ Data of a cone (or a cocone) d - a diagram in the schema for the finite limit sketch (note this actually a graph homomorphism, so edges map to edges, not paths) apex - an object in the schema legs - a list of pairs, where the first element selects an object in the diagram and the second element picks a morphism in the schema """ @auto_hash_equals struct Cone d::LabeledGraph apex::Symbol legs::Vector{Pair{Int, Symbol}} ulegs::Vector{Symbol} leg_inds::Dict{Symbol,Vector{Int}} is_dag::Bool uwd::StructACSet function Cone(d::LabeledGraph, apex::Symbol, legs::Vector{Pair{Int, Symbol}}) length(Set(first.(legs))) == length(legs) || error("nonunique legs $legs") # check if vertex ordering is toposort. Do things differently if so? 
is_dag = all(e->src(d,e) <= tgt(d,e), edges(d)) ulegs = unique(last.(legs)) leg_inds = Dict(map(ulegs) do l l=>findall(==(l), last.(legs)) end) return new(add_id!(d), apex, legs, ulegs, leg_inds, is_dag, cone_query(d, legs)) end end Cone(s::Symbol) = Cone(LabeledGraph(), s, Pair{Int,Symbol}[]) legs(c::Cone) = c.legs vlabel(C::Cone) = vlabel(C.d) elabel(C::Cone) = elabel(C.d) cone_leg(c::Int, i::Int) = Symbol("$(add_cone(c))_$i") cone_legs(c::Int) = [cone_leg(c, i) for i in 1:nv(c.d)] cocone_legs(c::Cone) = [cocone_leg(c, i) for i in 1:length(c.legs)] cocone_leg(c::Int, i::Int) = Symbol("$(add_cocone(c))_$i") cocone_leg(c::Int, i::Int, st) = let s = cocone_leg(c, i); (s, add_srctgt(s)...) end cocone_apex(c::Int) = Symbol("$(add_cocone(c))_apex") elabel(C::Cone, st::Bool) = elabel(C.d, true) # Path equations ################ const DD = DefaultDict{Pair{Int,Int},Set{Vector{Int}}} """Enumerate all paths of an acyclic graph, indexed by src+tgt""" function enumerate_paths(G_::HasGraph; sorted::Union{AbstractVector{Int},Nothing}=nothing )::DD G = deepcopy(G_) rem_parts!(G, :E, findall(e->src(G_,e)==tgt(G_,e), edges(G_))) sorted = isnothing(sorted) ? topological_sort(G) : sorted Path = Vector{Int} paths = [Set{Path}() for _ in 1:nv(G)] # paths that start on a particular V for v in reverse(topological_sort(G)) push!(paths[v], Int[]) # add length 0 paths for e in incident(G, v, :src) push!(paths[v], [e]) # add length 1 paths for p in paths[G[e, :tgt]] # add length >1 paths push!(paths[v], vcat([e], p)) end end end # Restructure `paths` into a data structure indexed by start AND end V allpaths = DefaultDict{Pair{Int,Int},Set{Path}}(()->Set{Path}()) for (s, ps) in enumerate(paths) for p in ps push!(allpaths[s => isempty(p) ? s : G[p[end],:tgt]], p) end end return allpaths end """Add path to commutative diagram without repeating information""" function add_path!(schema::LabeledGraph, lg::LabeledGraph, p::Vector{Symbol}, all_p::Dict{Vector{Symbol}, Int}, eqp::Union{Nothing, Vector{Symbol}}=nothing, ) #all_p = isnothing(all_p) ? union(values(enumerate_paths(lg)...)) : all_p s = only(incident(schema, first(p), :elabel)) for i in 1:length(p) if !haskey(all_p, p[1:i]) e = only(incident(schema, p[i], :elabel)) t = schema[e, [:tgt,:vlabel]] if isnothing(eqp) || i < length(p) new_v = add_part!(lg, :V; vlabel=t) else new_v = all_p[eqp] end s = i == 1 ? 
1 : all_p[p[1:i-1]] add_part!(lg, :E; src=s, tgt=new_v, elabel=p[i]) all_p[p[1:i]] = new_v end end end """ Get per-object diagrams encoding all commutative diagrams which start at that point, using the information of pairwise equations eqs:: Vector{Tuple{Symbol, Vector{Symbol}, Vector{Symbol}}} """ function eqs_to_diagrams(n::Symbol, schema::LabeledGraph, eqs) lgs = [LabeledGraph() for _ in 1:nv(schema)] all_ps = [Dict{Vector{Symbol}, Int}(Symbol[]=>1) for _ in 1:nv(schema)] for (i, root) in enumerate(schema[:vlabel]) add_part!(lgs[i], :V; vlabel=root) end for (p1, p2) in eqs # TODO: support more than 2 eqs at once src_i = schema[only(incident(schema, first(p1), [:elabel])), :src] if haskey(all_ps[src_i], p2) add_path!(schema, lgs[src_i], p1, all_ps[src_i], Vector{Symbol}(p2)) else add_path!(schema, lgs[src_i], p1, all_ps[src_i]) add_path!(schema, lgs[src_i], p2, all_ps[src_i], p1) end end return Dict(zip(vlabel(schema),lgs)) end function diagram_to_eqs(g::LabeledGraph) map(filter(x->length(x)>1, collect(values(enumerate_paths(g))))) do ps [g[p,:elabel] for p in ps] end end # Sketches ########## @present SchSingleton(FreeSchema) begin P1::Ob end @acset_type Singleton(SchSingleton) """ A finite-limit, finite-colimit sketch. Auto-generates data types for C-sets (representing models, i.e. functors from the schema to Set) and C-rels (for representing premodels, which may not satisfy equations/(co)limit constraints) """ @auto_hash_equals struct Sketch name::Symbol schema::LabeledGraph cones::Vector{Cone} cocones::Vector{Cone} eqs::Dict{Symbol, LabeledGraph} cset::Type crel::Type function Sketch(name::Symbol, schema::LabeledGraph; cones=Cone[], cocones=Cone[], eqs=Vector{Symbol}[]) where V<:AbstractVector add_id!(schema) namechars = join(vcat(schema[:vlabel], schema[:elabel])) e = "BAD SYMBOL in $schema" all([!occursin(x, namechars) for x in [",", "|"]]) || error(e) if isempty(eqs) eqds = Dict(map(vlabel(schema)) do v d = LabeledGraph(); add_part!(d,:V; vlabel=v) v => d end) else if (first(eqs) isa AbstractVector) [check_eq(schema, p,q) for (p, q) in eqs] eqds = eqs_to_diagrams(name,schema, eqs) else length(eqs) == nv(schema) || error("Bad eq input $eqs") eqds = Dict(zip(vlabel(schema),eqs)) end end all(gr->all(==(0),refl(gr)), values(eqds)) || error("refl in eq graph") [check_cone(schema, c) for c in cones] [check_cocone(schema, c) for c in cocones] cset_type = grph_to_cset(name, schema) crel_type = grph_to_crel(name, schema) return new(name, schema, cones, cocones, eqds, cset_type, crel_type) end end vlabel(S::Sketch) = vlabel(S.schema) elabel(S::Sketch) = elabel(S.schema) elabel(S::Sketch, st::Bool) = elabel(S.schema, true) labels(S::Sketch) = vcat(S.schema[:vlabel], elabel(S)) non_id(S::Sketch) = non_id(S.schema) """Convert a presentation of a schema (as a labeled graph) into a C-Set type""" function grph_to_cset(name::Symbol, sketch::LabeledGraph)::Type pres = Presentation(FreeSchema) xobs = [Ob(FreeSchema, s) for s in sketch[:vlabel]] for x in xobs add_generator!(pres, x) end getob(s::Symbol) = let ss=pres.generators[:Ob]; ss[findfirst(==(s), Symbol.(string.(ss)))] end for (e, src, tgt) in elabel(sketch, true) add_generator!(pres, Hom(e, getob(src), getob(tgt))) end expr = struct_acset(name, StructACSet, pres, index=elabel(sketch)) eval(expr) return eval(name) end """ Get a C-Set type that can store the information of premodels This includes a *relation* for each morphism (not a function) As well as data related to (co)cones, path eqs, and equivalence classes. 
For each object, we need another object which is the quotiented object via an equivalence relation. For each cocone, we need a *relation* for each diagram object for each element in the apex object. For each set of path equations (indexed by start object), we need (for each element in the start object) a *relation* for each object in the commutative diagram, signaling which values are possibly in the path. """ function grph_to_crel(name::Symbol,sketch::LabeledGraph; cones=Cone[], cocones=Cone[], path_eqs=LabeledGraph[] )::Type name′ = Symbol("rel_$name") pres = Presentation(FreeSchema) getob(s::Symbol) = let ss=pres.generators[:Ob]; ss[findfirst(==(s), Symbol.(string.(ss)))] end # add objects [add_generator!(pres, Ob(FreeSchema, v)) for v in sketch[:vlabel]] # add morphisms as relations for (e, src_, tgt_) in elabel(sketch, true) s, t = add_srctgt(e) g = add_generator!(pres, Ob(FreeSchema, e)) add_generator!(pres, Hom(s, g, getob(src_))) add_generator!(pres, Hom(t, g, getob(tgt_))) end expr = struct_acset(name′, StructACSet, pres, index=Symbol.(string.(pres.generators[:Hom]))) eval(expr) return eval(name′) end """Validate path eq""" function check_eq(schema::LabeledGraph, p::Vector,q::Vector)::Nothing # Get sequence of edge numbers in the schema graph pe, qe = [[only(incident(schema, edge, :elabel)) for edge in x] for x in [p,q]] ps, qs = [isempty(x) ? nothing : schema[:src][x[1]] for x in [pe, qe]] isempty(qe) || ps == qs || error( "path eq don't share start point \n\t$p ($ps) \n\t$q ($qs)") pen, qen = [isempty(x) ? nothing : schema[:tgt][x[end]] for x in [pe,qe]] isempty(qe) || pen == qen || error( "path eq don't share end point \n\t$p ($pen) \n\t$q ($qen)") !isempty(qe) || ps == pen|| error( "path eq has self loop but p doesn't have same start/end $p \n$q") all([schema[:tgt][p1]==schema[:src][p2] for (p1, p2) in zip(pe, pe[2:end])]) || error( "head/tail mismatch in p $p \n$q") all([schema[:tgt][q1]==schema[:src][q2] for (q1, q2) in zip(qe, qe[2:end])]) || error( "head/tail mismatch in q $p \n$q") return nothing end """Validate cone data""" function check_cone(schema::LabeledGraph, c::Cone)::Nothing vert = only(incident(schema, c.apex, :vlabel)) for (v, l) in c.legs edge = only(incident(schema, l, :elabel)) schema[:src][edge] == vert || error("Leg does not come from apex $c") schema[:vlabel][schema[:tgt][edge]] == c.d[:vlabel][v] || error( "Leg $l -> $v does not go to correct vertex $c") is_homomorphic(c.d, schema) || error( "Cone diagram does not map into schema $c") end end """Validate cocone data""" function check_cocone(schema::LabeledGraph, c::Cone)::Nothing vert = only(incident(schema, c.apex, :vlabel)) for (v, l) in c.legs edge = only(incident(schema, l, :elabel)) schema[:tgt][edge] == vert || error( "Leg $l does not go to apex $(c.apex)") schema[:vlabel][schema[:src][edge]] == c.d[:vlabel][v] || error( "Leg $l -> $v does not go to correct vertex $c") is_homomorphic(c.d, schema) || error( "Cone diagram $(c.d) \ndoes not map into schema\n $schema") end end const S0=Sketch(:dummy, LabeledGraph()) # placeholder sketch function project(S::Sketch, crel::StructACSet, c::Cone) crel = deepcopy(crel) for v in setdiff(vlabel(S),vlabel(c)) ∪ setdiff(elabel(S),elabel(c)) rem_parts!(crel, v, parts(crel, v)) end return crel end # Don't yet know if this stuff will be used ########################################## """List of arrows between two sets of vertices""" function hom_set(S::Sketch, d_symbs, cd_symbs)::Vector{Symbol} symbs = [d_symbs, cd_symbs] d_i, cd_i = [vcat(incident(S.schema, x, 
:vlabel)...) for x in symbs] e_i = setdiff( (vcat(incident(S.schema, d_i, :src)...) ∩ vcat(incident(S.schema, cd_i, :tgt)...)), refl(S.schema) ) return S.schema[e_i, :elabel] end hom_in(S::Sketch, t::Symbol) = hom_set(S, S.schema[:vlabel], [t]) hom_out(S::Sketch, t::Symbol) = hom_set(S, [t], S.schema[:vlabel]) hom_in(S::Sketch, t::Vector{Symbol}) = vcat([hom_in(S,x) for x in t]...) hom_out(S::Sketch, t::Vector{Symbol}) = vcat([hom_out(S,x) for x in t]...) """Dual sketch. Optionally rename obs/morphisms and the sketch itself""" function dual(s::Sketch, n::Symbol=Symbol(), obs::Vector{Pair{Symbol, Symbol}}=Pair{Symbol, Symbol}[]) d = Dict(obs) eqsub = ps -> reverse([get(d, p, p) for p in ps]) dname = isempty(string(n)) ? Symbol("$(s.name)"*"_dual") : n dschema = dualgraph(s.schema, d) dcones = [dual(c, d) for c in s.cocones] dccones = [dual(c,d) for c in s.cones] eqs = vcat(diagram_to_eqs.(values(s.eqs))...) deqs = [[eqsub(p) for p in ps] for ps in eqs] Sketch(dname, dschema, cones=dcones, cocones=dccones,eqs=deqs) end dual(c::Cone, obs::Dict{Symbol, Symbol}) = Cone(dual(dualgraph(c.d, obs)), get(obs,c.apex,c.apex), [(nv(c.d)-i+1 => get(obs, x, x)) for (i, x) in c.legs]) """Reverse vertex indices""" function dual(lg::LabeledGraph) G = deepcopy(lg) n = nv(lg)+1 set_subpart!(G, :vlabel,reverse(lg[:vlabel])) set_subpart!(G, :refl,reverse(lg[:refl])) [set_subpart!(G, y, [n-x for x in lg[y]]) for y in [:src,:tgt]] return G end """Flip edge directions. Optionally rename symbols""" function dualgraph(lg::LabeledGraph, obd::Dict{Symbol, Symbol}) g = deepcopy(lg) # reverse vertex order set_subpart!(g, :src, lg[:tgt]) set_subpart!(g, :tgt, lg[:src]) set_subpart!(g, :vlabel, replace(z->get(obd, z, z), g[:vlabel])) set_subpart!(g, :elabel, replace(z->get(obd, z, z), g[:elabel])) return g end src(S::Sketch, e::Symbol) = S.schema[:vlabel][S.schema[:src][ only(incident(S.schema, e, :elabel))]] tgt(S::Sketch, e::Symbol) = S.schema[:vlabel][S.schema[:tgt][ only(incident(S.schema, e, :elabel))]] cone_to_dict(c::Cone) = Dict([ "d"=>generate_json_acset(c.d), "apex"=>string(c.apex),"legs"=>c.legs]) dict_to_cone(d::Dict)::Cone = Cone( parse_json_acset(LabeledGraph,d["d"]), Symbol(d["apex"]), Pair{Int,Symbol}[parse(Int, k)=>Symbol(v) for (k, v) in map(only, d["legs"])]) """TO DO: add cone and eq info to the hash...prob requires CSet for Sketch""" Base.hash(S::Sketch) = call_nauty(to_graph(S.schema)) to_json(S::Sketch) = JSON.json(Dict([ :name=>S.name, :schema=>generate_json_acset(S.schema), :cones => [cone_to_dict(c) for c in S.cones], :cocones => [cone_to_dict(c) for c in S.cocones], :eqs => [generate_json_acset(S.eqs[v]) for v in vlabel(S)]])) function sketch_from_json(s::String)::Sketch p = JSON.parse(s) Sketch(Symbol(p["name"]), parse_json_acset(LabeledGraph, p["schema"]), cones=[dict_to_cone(d) for d in p["cones"]], cocones=[dict_to_cone(d) for d in p["cocones"]], eqs=[parse_json_acset(LabeledGraph,e) for e in p["eqs"]]) end function relsize(S::Sketch, I::StructACSet)::Int return sum([nparts(I, x) for x in S.schema[:vlabel]]) end """Pretty print the sizes of objects in a (pre)model""" function sizes(S::Sketch, I::StructACSet; )::String join(["$o: $(nparts(I, o))" for o in S.schema[:vlabel]],", ") end function sizes(::Sketch, I::StructACSet{S}, more::Bool)::String where {S} join(["$o: $(nparts(I, o))" for o in ob(S)], ", ") end """ Query that returns all instances of the base pattern. External variables are labeled by the legs of the cone. 
If the apex of the cone has multiple legs with the same morphism, then by functionality the junctions they point to must be merged, which we enforce. However, this assumes the model is valid. For instance, the monomorphism cone constraint would never detect that a morphism isn't mono if we perform this optimization, so perhaps we should never do this? Maybe we just use the optimized query depending on whether certain tables/fks are frozen. """ function cone_query(d::LabeledGraph, legs; optimize=false)::StructACSet verbose = false vars = [Symbol("x$i") for i in nparts(d, :V)] typs = ["$x(_id=x$i)" for (i, x) in enumerate(d[:vlabel])] bodstr = vcat(["begin"], typs) for (i,e) in filter(x->x[1]∉refl(d), collect(enumerate(d[:elabel]))) s=src(d, i); t=tgt(d,i); push!(bodstr, "$e(src_$e=x$s, tgt_$e=x$t)") end push!(bodstr, "end") exstr = "($(join(["$(v)_$i=x$k" for vs in values(vars) for (i, (k,v)) in enumerate(legs)],",") ))" ctxstr = "($(join(vcat(["x$i::$x" for (i, x) in enumerate(d[:vlabel])],),",")))" ex = Meta.parse(exstr) ctx = Meta.parse(ctxstr) hed = Expr(:where, ex, ctx) bod = Meta.parse(join(bodstr, "\n")) if verbose println("ex $exstr\n ctx $ctxstr\n bod $(join(bodstr, "\n"))") end res = parse_relation_diagram(hed, bod) if optimize # Merge junctions which μl = [minimum(findall(==(l), last.(legs))) for l in last.(legs)] μj = vcat(μl, length(legs)+1 : nparts(res,:Junction)) μb = vcat(μl, length(legs)+1 : nparts(res,:Box)) μp = vcat(μl, length(legs)+1 : nparts(res,:Port)) for j in [:junction, :outer_junction] set_subpart!(res,j,μj[res[j]]) end set_subpart!(res, :box, μb[res[:box]]) res2 = typeof(res)() # There is a bug: cannot delete from a UWD w/o an error copy_parts!(res2, res; Junction=[i for i in parts(res, :Junction) if i ∈ μj], Box=[i for i in parts(res, :Box) if i ∈ μb], Port=[i for i in parts(res, :Port) if i ∈ μp], OuterPort=parts(res,:OuterPort)) return res2 else return res end end cone_query(c::Cone) = cone_query(c.d, c.legs) # function cocone_to_cset(n::Symbol, schema::LabeledGraph, c::Cone, i::Int) # name = Symbol("$(n)_cocone_$i") # pres = Presentation(FreeSchema) # obs = Dict([v=>add_generator!(pres, Ob(FreeSchema, Symbol("$v"))) # for v in Set(c.d[:vlabel])]) # ap = add_generator!(pres, Ob(FreeSchema, :apex)) # for (j,k) in enumerate(c.d[:vlabel]) # cc, ccs, cct = cocone_leg(i, j, true) # grel = add_generator!(pres, Ob(FreeSchema, cc)) # add_generator!(pres, Hom(ccs, grel, ap)) # add_generator!(pres, Hom(cct, grel, obs[k])) # end # expr = struct_acset(name, StructACSet, pres, index=pres.generators[:Hom]) # eval(expr) # return eval(name) # end # function eq_to_type(n::Symbol, p::LabeledGraph, x::Symbol) # name = add_path(n,x) # pres = Presentation(FreeSchema) # obs = [add_generator!(pres, Ob(FreeSchema, add_path(i))) # for (i, v) in enumerate(p[:vlabel])] # for (i,k) in collect(enumerate(p[:vlabel])) # pob, psrc, ptgt = add_pathrel(i) # gob = add_generator!(pres, Ob(FreeSchema, pob)) # add_generator!(pres, Hom(psrc, gob, obs[1])) # add_generator!(pres, Hom(ptgt, gob, obs[i])) # end # expr = struct_acset(name, StructACSet, pres, index=pres.generators[:Hom]) # eval(expr) # return eval(name) # end """ For each cone, we have a cone object which keeps track of, for each element in the apex object, which tuple of elements in the diagram objects are matched. We only need one table for each distinct *type* of object in the diagram, not one for each vertex in the diagram. 
""" # function cone_to_cset(n::Symbol, schema::LabeledGraph, c::Cone, i::Int) # name = Symbol("$(n)_cone_$i") # pres = Presentation(FreeSchema) # obs = Dict([v=>add_generator!(pres, Ob(FreeSchema, Symbol("$v"))) # for v in Set(c.d[:vlabel])]) # ap = add_generator!(pres, Ob(FreeSchema, :apex)) # for (j,k) in enumerate(c.d[:vlabel]) # add_generator!(pres, Hom(cone_leg(i, j), ap, obs[k])) # end # expr = struct_acset(name, StructACSet, pres, index=pres.generators[:Hom]) # eval(expr) # return eval(name) # end end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
16575
module Limits export LoneCones, compute_cones!, compute_cocones!, query_cone using ..Sketches using ..Models using Catlab.CategoricalAlgebra using DataStructures const LoneCones = Dict{Symbol,Set{Int}} """ Add cone apex objects based on conjunctive queries. For example: a cone object D over a cospan B -> A <- C (i.e. a pullback) Imagine all sets have two elements. If B maps both elements to a₁ and C ↪ A, then a conjunctive query looking for instances of the diagram should return: QueryRes A B C ---------------- 1 a₁ b₁ c₁ 2 a₁ b₂ c₁ Because the functions are partial in the premodel, there may be limit objects that will be discovered to exist (by merging elements or adding new connections) So the query result is a *lower* bound on the number of elements in the apex. This means we expect there to be two objects in the limit object D. If an element already exists with the same legs, then we are good. If an element that disagrees with one of the legs exists, then we need to add a new element. If is an element with information that partially matches a query result, we still add a new element but note that these two may be merged at a later point. """ function compute_cones!(S::Sketch, J::StructACSet, eq::EqClass, #ns::NewStuff, d::Defined)::Tuple{Bool,Bool} changed = false for c in filter(c -> c.apex ∉ d[1], S.cones) cchanged, cfail = compute_cone!(S, J, c, eq, d) changed |= cchanged if cfail return (changed, true, m) end end return changed, false end """ Modifies `m`, `eq`, and `d` Check/enforce the following cone properties in this order: 1. Two cone apex elements are equal if their corresponding leg values match. 2. Same as (1), but in the other direction. 3. For every pattern match of the cone's diagram in J, there is a cone element. If we do this process and all objects/arrows in the cone's diagram are defined, then the limit object itself is fully defined (cannot change in cardinality). (1)/(2) update `eq`, whereas information from (3) is added to the `Modify` Dream: we can somehow only query 'newly added' information Returns whether it changed anything and whether it failed. """ function compute_cone!(S::Sketch, J::StructACSet, cone_::Cone, eq::EqClass, d::Defined)::Pair{Bool, Bool} cone_.apex ∉ d[1] || error("don't compute cone for something defined! $J $d") verbose, changed = false, false cchange, cfail, cones = cone_eqs!(S, J, cone_, eq, d) changed |= cchange if cfail return changed => true end # look for instances of the pattern query_results = query_cone(S, J, cone_, eq) #query(J, cone_query(cone_)) for res in query_results length(res) == length(cone_.legs) || error("Bad res $res from query") resv = Vector{Int}(collect(res)) # For any new diagram matches that we do not have an explicit apex elem for: if !haskey(cones, resv) ne = add_part!(J, cone_.apex) for (f, v) in zip(last.(cone_.legs), resv) add_rel!(S, J, d, f, ne, v) end changed=true # If we don't already have a NewElem with these legs... # new_elms = collect(values(ns.ns[cone_.apex])) # if verbose # println("Checking if res $res is in existing ns $(new_elms)") # end # if !any([all([in_same_set(eq[tgt(S,f)], ne.map_out[f], c.map_out[f]) # for f in last.(cone_.legs)]) for c in new_elms]) # changed = true # if verbose println("NEW APEX $res") end # ns.ns[cone_.apex][resv] = ne # end else # anything to do? 
end end return changed => false end """ Look for instances of a cone's diagram in a premodel """ function query_cone(S::Sketch, J::StructACSet, c::Cone, eq::EqClass, )::Vector{Vector{Int}} res = [[]] verbose = false for (i, tab) in enumerate(c.d[:vlabel]) if verbose println("i $i tab $tab\n\tres $res") end new_res = Vector{Int}[] if isempty(res) return Vector{Int}[] end # we could product our options with this set, but let's filter now eqs = eq_reps(eq[tab]) # We can immediately filter possible values based on self-edges in diagram for self_e in incident(c.d, i, :tgt) ∩ incident(c.d, i, :src) self_e_name = c.d[self_e, :elabel] eqs = filter(x -> has_map(S, J, self_e_name, x, x, eq), eqs) end # any edges w/ tables we've seen so far in/out of current one constrain us es_in = filter(e->c.d[e, :src] < i, incident(c.d, i, :tgt)) es_out =filter(e->c.d[e, :tgt] < i, incident(c.d, i, :src)) for old_res in res for new_val in eqs fail = false for e in es_in e_name, e_src = [c.d[e, x] for x in [:elabel, :src]] if !has_map(S, J, e_name, old_res[e_src], new_val, eq) if verbose println("No match: or $old_res, nv $new_val, e $e") end fail = true break end end for e in (fail ? [] : es_out) e_name, e_tgt = [c.d[e, x] for x in [:elabel, :tgt]] if !has_map(S, J, e_name, new_val, old_res[e_tgt], eq) if verbose println("No match: or $old_res, nv $new_val, e $e") end fail = true break end end if !fail push!(new_res, vcat(old_res, [new_val])) end end end res = new_res end return unique([[subres[i] for i in first.(c.legs)] for subres in res]) end """ Start with equivalence classes of apex elements. Make the corresponding leg elements equal. Use the equivalences of cone apex elements to induce other equivalences. Return whether the resulting model is *un*satisfiable (if certain merging is forbidden). Modifies `eq` and `w`. """ function cone_eqs!(S::Sketch, J::StructACSet, c::Cone, eq::EqClass, d::Defined, )::Tuple{Bool, Bool, Dict{Vector{Int}, Int}} changed, verbose = false, false eqclasses_legs = Vector{Int}[] apex_elems = eq_reps(eq[c.apex]) legnames = last.(c.legs) legtabs = [tgt(S, leg) for leg in legnames] for eqs in apex_elems eqclass_legs = Int[] for (tab, leg) in zip(legtabs, legnames) s, t = add_srctgt(leg) legvals = Set(vcat(J[incident(J, eqs, s), t]...)) if length(legvals) == 1 [add_rel!(S, J, d, leg, e, only(legvals)) for e in eqs] push!(eqclass_legs, only(legvals)) elseif isempty(legvals) push!(eqclass_legs, 0) elseif length(legvals) > 1 # all the elements in this leg are equal if tab ∈ d[1] return (changed, true, Dict{Vector{Int}, Int}()) else for (lv1, lv2) in Iterators.product(legvals, legvals) if !in_same_set(eq[tab], lv1, lv2) changed = true union!(eq[tab], lv1, lv2) end end push!(eqclass_legs, first(legvals)) end end end push!(eqclasses_legs, eqclass_legs) end # Now quotient the apex elements if they have the same legs res = Dict{Vector{Int}, Int}() for (eqc, eqcl) in zip(apex_elems, eqclasses_legs) if minimum(eqcl) > 0 lv = [find_root!(eq[tab], v) for (tab, v) in zip(legtabs, eqcl)] if haskey(res, lv) if !in_same_set(eq[c.apex], eqc, res[lv]) if verbose println("$(c.apex) elements have the same legs: $eqc $(res[lv])") end changed = true union!(eq[c.apex], eqc, res[lv]) end else res[lv] = eqc end end end return (changed, false, res) end # Colimits ########## """ Add cocone apex objects based on the cocone diagram. Modifies `eq` For example: a cocone object D over a span B <- A -> C (i.e. 
a pushout) We currently assume all functions within the diagram are defined, and then reason about what the data of the legs of the cocone should be and whether or not elements in the apex should be merged. Future work might involve reasoning even when the functions in the diagram are only partially defined. We start assuming there is a cocone element for |A|+|B|+|C| and then quotient by each arrow in the diagram. Assume each set has two elements, so we initially suppose D = |6|. Let A->B map both elements to b₁, while A↪C. D legA legB legC ------------------ a₁ {a₁} ? ? a₂ {a₂} ? ? b₁ ? {b₁} ? b₂ ? {b₂} ? c₁ ? ? {c₁} c₂ ? ? {c₂} Take the map component that sends a₁ to b₁. The result will be D legA legB legC ------------------- a₁b₁ {a₁} {b₁} ? a₂ {a₂} ? ? b₂ ? {b₂} ? c₁ ? ? {c₁} c₂ ? ? {c₂} After all the quotienting (assuming all map components are defined), we get D legA legB legC ------------------------------ a₁a₂b₁c₁c₂ {a₁a₂} {b₁} {c₁c₂} b₂ ? {b₂} ? If certain pieces of data are not yet defined (e.g. a₁↦c₁), we may have overestimated the size of the cocone, but when we later learn that map information, then the elements in D (a₁c₂ and a₂b₁c₂) will be merged together. So we get an upper bound on the number of elements of the apex object. This data combines with existing elements in D and any leg data (A->D,B->D,C->D) that may exist. Every table in the diagram potentially has a leg into the apex. We consider all possibilities: 0.) None of the legs are defined (then: add a fresh element to D & set the legs) 1.) All legs within a given group map to the same element in D (nothing to do) 2.) Same as above, except some are undefined (then: set the undefined ones to the known value from the other ones) 3.) Distinct values of the apex are mapped to by legs. The only way for this to be consistent is if we merge the values in the apex. There are also possibilities to consider from the apex side: 1.) An apex element has a diagram group associated with it (nothing to do) 2.) An apex element has NO group assigned to it (we need to consider all the possibilities for a new element (in one of the legs) mapping to it. This makes cocones different from cones: for cones, an unknown cone element will have its possibilities resolved by ordinary branching on possible FK values, whereas with cocones we have add a distinctive kind of branching. 3.) Multiple groups may be assigned to the SAME apex element. This cannot be fixed in general by merging/addition in the way that limit sketch models can be. Thus we need a way to fail completely (given by the `nothing` option). """ function compute_cocones!(S::Sketch, J::StructACSet, eq::EqClass, d::Defined)::Tuple{Bool,Bool,LoneCones} changed, lone_cone = false, LoneCones() for c in filter(c->c.apex ∉ d[1], S.cocones) cchanged, cfailed, res = compute_cocone!(S, J, c, eq, d) if cfailed return (changed, true, lone_cone) end changed |= cchanged lone_cone[c.apex] = res # assumes there aren't multiple cones on same vert end return (changed, false, lone_cone) end """ Unlike cones, where knowing partial maps can give you matches, we require all maps in a cocone diagram to be completely known in order to determine cocone elements. 
Updates `m` and `eq` and `d` """ function compute_cocone!(S::Sketch, J::StructACSet, co_cone::Cone, eqc::EqClass, d::Defined)::Tuple{Bool,Bool, Set{Int}} co_cone.apex ∉ d[1] || error("Don't compute cocone that's defined $J $d") println("computing cocone $(co_cone.apex) for ") show(stdout, "text/plain", crel_to_cset(S,J)[1]) verbose, changed = true, false # Get all objects and morphisms (+ their src/tgt) in the diagram diag_objs = co_cone.d[:vlabel] if ne(co_cone.d) < nv(co_cone.d) diag_homs = Tuple{Symbol,Int,Int}[] else diag_homs = collect(zip([co_cone.d[non_id(co_cone.d), x] for x in [:elabel, :src,:tgt]]...)) end # Get *unquotiented* apex: i.e. all distinct elements for each table involved apex_obs = vcat([[(i, v) for v in eq_reps(eqc[obj])] for (i, obj) in enumerate(diag_objs)]...) # Get the apex objs index from the (tab_ind, elem_ind) value itself apex_ob_dict = Dict([(j,i) for (i,j) in enumerate(apex_obs)]) # Equivalence class of `apex_obs` eq_elems = IntDisjointSets(length(apex_obs)) # Check if all homs in the diagram are total before moving on if any(e->!is_total(S,J,e,eqc), elabel(co_cone)) return changed, false, Set{Int}() end # Use C-set map data to quotient this if verbose println("diag homs $diag_homs") end for (e, i, j) in diag_homs esrc, etgt = add_srctgt(e) stab, ttab = src(S, e), tgt(S,e) for (x_src, x_tgt) in zip([J[x] for x in [esrc, etgt]]...) is, it = find_root!(eqc[stab], x_src), find_root!(eqc[ttab], x_tgt) s, t = [apex_ob_dict[v] for v in [(i, is), (j, it)]] union!(eq_elems, s, t) end end # Reorganize equivalence class data into a set of sets. eqsets = eq_sets(eq_elems; remove_singles=false) println("eqsets $eqsets ") # Determine what apex element(s) each eq class corresponds to, if any apex_tgt_dict = Dict() for eqset in eqsets # ignore eqsets that have no leg/apex values eqset_vals = [apex_obs[i] for i in eqset] eqset_tabs = first.(eqset_vals) if isempty(eqset_tabs ∩ vcat([co_cone.apex],first.(co_cone.legs))) # println("Eqset disconnected from apex, ignoring $eqset_vals") apex_tgt_dict[eqset] = Int[] continue end # sanity check: no table appears more than once in an eq class for i in Set(first.(eqset_vals)) length(filter(==(i), first.(eqset_vals))) <= 1 || "eqset_vals $eqset_vals" end if verbose println("eqset_vals $eqset_vals") end # Now that we know this eqset either maps to an apex element or it is in the # table of a leg (so we should create a new apex element) leg_vals = Tuple{Symbol,Int,Int}[] for (leg_ind, leg_name) in co_cone.legs ind_vals = collect(last.(filter(v->v[1]==leg_ind, eqset_vals))) println("ind vals $ind_vals") if length(ind_vals) == 1 ind_val = only(ind_vals) l_src, l_tgt = add_srctgt(leg_name) a_tgt = J[incident(J, ind_val, l_src), l_tgt] if !isempty(a_tgt) println("$(incident(J, ind_val, l_src)) a tgt $a_tgt") ap = find_root!(eqc[co_cone.apex], first(a_tgt)) push!(leg_vals, (leg_name, ind_val, ap)) end end end apex_tgts = apex_tgt_dict[eqset] = collect(Set(last.(leg_vals))) # Handle things different depending on how many apex_tgts the group has if length(apex_tgts) == 0 # we have a new element to add ne = NewElem() # We have a cocone object with nothing mapping into it. Need to branch. 
elseif length(apex_tgts) == 1 # eqset is consistent apex_rep = only(apex_tgts) else # eqset is consistent only if we merge apex vals apex_rep = minimum(apex_tgts) if length(apex_tgts) > 1 # we have to merge elements of apex unions!(eqc[co_cone.apex], collect(apex_tgts)) if verbose println("computing cocone $(co_cone.apex) unioned indices $apex_tgts") end end end # Update `src(leg)` (index # `l_val`) to have map to `apex_rep` via `leg` for (leg_ind, leg_name) in co_cone.legs for ind_val in collect(last.(filter(v->v[1]==leg_ind, eqset_vals))) if length(apex_tgts)==0 #println("Cocone added $leg_name: $ind_val -> $apex_rep (fresh)") push!(ne.map_in[leg_name], ind_val) elseif !has_map(S, J, leg_name, ind_val, apex_rep, eqc) if verbose println("Cocone added $leg_name: $ind_val -> $apex_rep") end changed = true add_rel!(S, J, d, leg_name, ind_val, apex_rep) end end end if length(apex_tgts)==0 new_ind = add_part!(J, co_cone.apex) for (k, vs) in ne.map_in for v in vs add_rel!(S, J, d, k, v, new_ind) end end changed = true end end # Fail if necessarily distinct groups map to the same apex element eqset_pairs = collect(Iterators.product(eqsets, eqsets)) for es in filter(x->x[1]!=x[2], eqset_pairs) tgts1, tgts2 = [apex_tgt_dict[e] for e in es] conflict = intersect(tgts1, tgts2) if !isempty(conflict) return changed, true, Set{Int}() end end seen_apex_tgts = (isempty(apex_tgt_dict) ? Set{Int}() : union(values(apex_tgt_dict)...)) return changed, false, Set( collect(setdiff(parts(J, co_cone.apex), seen_apex_tgts))) end end # module
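# Illustrative sketch of the quotienting step walked through in the
# `compute_cocone!` docstring, using only DataStructures.IntDisjointSets.
# The numbering is an assumption made for this example:
# 1,2 ↔ a₁,a₂   3,4 ↔ b₁,b₂   5,6 ↔ c₁,c₂.
using DataStructures
let eq = IntDisjointSets(6)
  # A → B sends both a₁ and a₂ to b₁; A ↪ C sends a₁ ↦ c₁ and a₂ ↦ c₂.
  union!(eq, 1, 3); union!(eq, 2, 3)
  union!(eq, 1, 5); union!(eq, 2, 6)
  # Two classes survive, {a₁,a₂,b₁,c₁,c₂} and {b₂}, matching the docstring table.
  @assert length(Set(find_root!(eq, i) for i in 1:6)) == 2
end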
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
21496
module ModEnum export chase_step, chase_step_db, chase_set, sat_eqs, path_eqs!, prop_path_eq_info!, chase_below using ..Sketches using ..DB using ..Models using ..Limits using Catlab.WiringDiagrams, Catlab.CategoricalAlgebra using Catlab.Programs.RelationalPrograms: parse_relation_diagram using Catlab.Graphs: refl using Combinatorics, DataStructures, Distributed using LibPQ, Tables """ parallelize by adding Threads.@threads before a for loop. Hard to do w/o creating bugs. """ # Type synonyms ############### const Poss = Tuple{Symbol, Int, Modify} struct Branch branch::Symbol # either a morphism or a cocone apex val::Int # index of the src element index or the cocone poss::Vector{Poss} # Modifications: possible ways of branching end const b_success = Branch(Symbol(),0,[]) # Toplevel functions #################### """ Take a sketch and a premodel and perform one chase step. 1. Build up equivalence classes using path equalities 2. Compute cones and cocones 3. Consider all TGDs (foreign keys that point to nowhere). - Pick one and return the possible decisions for branching on it """ function chase_step(S::Sketch, J::StructACSet, d::Defined )::Union{Nothing,Tuple{StructACSet, Defined, Branch}} # Initialize variables verbose = false fail, J = handle_zero_one(S, J, d) # doesn't modify J if fail return nothing end ns, lc = NewStuff(), LoneCones() # this loop might not be necessary. If one pass is basically all that's # needed, then this loop forces us to run 2x loops for cnt in Iterators.countfrom() if verbose && cnt > 1 println("\tchase step iter #$cnt") end if cnt > 10 error("TOO MANY ITERATIONS") end changed, failed, J, lc, d = propagate_info(S, J, d) if failed return nothing end if !changed break end end # add new things that make J bigger # update_crel!(J, ns) # Flag (co)cones as defined, now that we've added the newstuff for c in filter(c->c.apex ∉ d[1], vcat(S.cones,S.cocones)) if (c.d[:vlabel] ⊆ d[1]) && (c.d[:elabel] ⊆ d[2]) if verbose println("flagging $(c.apex) as defined: $(sizes(S, J)) \n\td $d") end push!(d[1], c.apex) union!(d[2], Set(last.(c.legs))) end end # crel_to_cset(S, J) # println("J Res "); show(stdout, "text/plain", crel_to_cset(S, J)[1]) fail, J = handle_zero_one(S, J, d) # doesn't modify J update_defined!(S, J, d) if fail return nothing end pri = priority(S, d, [k for (k,v) in lc if !isempty(v)]) if isnothing(pri) return (J, d, b_success) end i::Union{Int,Nothing} = haskey(lc, pri) ? first(collect(lc[pri])) : nothing return (J, d, get_possibilities(S, J, d, pri, i)) end """Set cardinalities of 0 and 1 objects correctly + maps into 1""" function handle_zero_one(S::Sketch, J::StructACSet, d::Defined)::Pair{Bool,StructACSet} J = deepcopy(J) eq = init_eq(S, J) for t1 in one_ob(S) push!(d[1], t1) unions!(eq[t1], collect(parts(J, t1))) if nparts(J, t1) == 0 add_part!(J, t1) end for e in filter(e-> tgt(S,e)==t1, elabel(S)) [add_rel!(S, J, d, e, i, 1) for i in parts(J, src(S, e))] end end merge!(S, J, eq) for t0 in zero_ob(S) push!(d[1], t0) if nparts(J, t0) > 0 return true => J end end return false => J end """ Use path equalities, functionality of FK relations, cone/cocone constraints to generate new data and to quotient existing data. Separate information that can be safely applied within a while loop (i.e. everything except for things related to newly added elements). 
""" function propagate_info(S::Sketch, J::StructACSet, d::Defined )::Tuple{Bool, Bool, StructACSet, LoneCones, Defined} verbose, changed = false, false eq = init_eq(S, J) # trivial equivalence classes # Path Eqs pchanged, pfail = path_eqs!(S,J,eq,d) changed |= pchanged if pfail return (changed, true, J, LoneCones(), d) end if verbose println("\tpchanged $pchanged: $(sizes(S, J)) ") end if pchanged update_defined!(S,J,d) end # Cones cchanged, cfail = compute_cones!(S, J, eq, d) changed |= cchanged if cfail return (changed, true, J, LoneCones(), d) end if verbose println("\tcchanged $cchanged $(sizes(S, J)) ") end if cchanged update_defined!(S,J,d) end # Cocones cochanged, cfail, lone_cones = compute_cocones!(S, J, eq, d) if verbose println("\tcochanged $cochanged: $(sizes(S, J)) ") end changed |= cochanged if cfail return (changed, true, J, LoneCones(), d) end if cochanged update_defined!(S,J,d) end # because this is at the end, chased premodels should be functional fchanged, ffail = fun_eqs!(S, J, eq, d) if verbose println("\tfchanged $fchanged: $(sizes(S, J))") end changed |= fchanged if ffail return (changed, true, J, LoneCones(), d) end if fchanged update_defined!(S,J,d) end cs = crel_to_cset(S, J) # will trigger a fail if it's nonfunctional #if verbose show(stdout, "text/plain", cs[1]) end return (changed, false, J, lone_cones, d) end """ For each unspecified FK, determine its possible outputs that don't IMMEDIATELY violate a cone/cocone constraint. Additionally consider an option that the FK points to a fresh element in the codomain table. It may seem like, if many sets of possibilities has only one option, that we could safely apply all of them at once. However, this is not true. If a₁ and a₂ map to B (which is empty), then branching on either of these has one possibility; but the pair of them has two possibilities (both map to fresh b₁, or map to fresh b₁ and b₂). """ function get_possibilities(S::Sketch, J::StructACSet, d::Defined, sym::Symbol, i::Union{Nothing, Int}=nothing)::Branch if isnothing(i) # branching on a foreign key src_tab, tgt_tab = src(S,sym), tgt(S,sym) esrc, _ = add_srctgt(sym) # sym ∉ d[2] || error("$d but branching $sym: $src_tab -> $tgt_tab") u = first(setdiff(parts(J,src_tab), J[esrc])) # possibilities of setting `u`'s value of FK `e` subres = Poss[] # First possibility: a `e` sends `u` to a fresh ID if tgt_tab ∉ d[1] mu = Modify() mu.newstuff.ns[tgt_tab][(sym, u)] = NewElem() push!(mu.newstuff.ns[tgt_tab][(sym, u)].map_in[sym], u) push!(subres, (sym, 0, mu)) end # Remaining possibilities (check satisfiability w/r/t cocones/cones) for p in 1:nparts(J,tgt_tab) m = Modify() push!(m.update, (sym, u, p)) push!(subres, (sym, p, m)) end return Branch(sym, u, subres) else # Orphan cocone apex element. cocone = only([c for c in S.cocones if c.apex == sym]) val = first(vs) # They're all symmetric, so we just need one. 
subres = Poss[] # all possible ways to map to an element of this cocone for leg in last.(cocone.legs) srctab = src(S, leg) src_fk = add_srctgt(leg)[1] # Consider a new element being added and mapping along this leg if srctab ∉ z1 && srctab ∉ d[1] fresh = Modify() fresh.newstuff.ns[srctab][(k, leg)] = NewElem() fresh.newstuff.ns[srctab][(k, leg)].map_out[leg] = val push!(subres, (leg, nparts(J, srctab) + 1, fresh)) end # Consider existing elements for which this leg has not yet been set for u in setdiff(parts(J, srctab), J[src_fk]) m = Modify() push!(m.update, (leg, u, val)) push!(subres, (leg, u, m)) end end return Branch(cocone.apex, val, subres) end end # DB #### """Explore a premodel and add its results to the DB.""" function chase_step_db(db::T, S::Sketch, premodel_id::Int, redo::Bool=false)::Pair{Bool, Vector{Int}} where {T<:DBLike} verbose = 1 # Check if already done if !redo redo_res = handle_redo(db, premodel_id) if !isnothing(redo_res) return redo_res end end J_, d_ = get_premodel(db, S, premodel_id) if verbose > 0 println("CHASING PREMODEL #$premodel_id: $(sizes(S, J_))") end # show(stdout, "text/plain", crel_to_cset(S, J_)[1]) println("before chase step $d_") cs_res = chase_step(S, J_, d_) # Failure if isnothing(cs_res) if verbose > 0 println("\t#$premodel_id: Fail") end set_fired(db, premodel_id) set_failed(db, premodel_id, true) return false => Int[] end # Success set_failed(db, premodel_id, false) J, d, branch = cs_res println("branch $branch d $d") println("\tChased premodel: $(sizes(S, J))") # show(stdout, "text/plain", crel_to_cset(S, J)[1]) chased_id = add_premodel(db, S, J, d; parent=premodel_id) println("new chased id = $chased_id") # Check we have a real model if branch == b_success if verbose > 0 println("\t\tFOUND MODEL") end println("J-> $(crel_to_cset(S,J)[1])") return true => [add_model(db, S, J, d, chased_id)] else if verbose > 0 println("\tBranching #$premodel_id on $(branch.branch)") end res = Int[] for (e,i,mod) in branch.poss (J__, d__) = deepcopy((J,d)) update_crel!(S, J__, d__, mod) bstr = string((branch.branch, branch.val, e, i)) push!(res, add_branch(db, S, bstr, chased_id, J__, d__)) end return false => res end end """ If there's nothing to redo, return nothing. Otherwise return whether or not the premodel is a model + its value """ function handle_redo(db::Db, premodel_id::Int )::Union{Nothing,Pair{Bool,Vector{Int}}} z = columntable(execute(db.conn, """SELECT 1 FROM Premodel WHERE Premodel_id=\$1 AND failed IS NULL""", [premodel_id])) if isempty(z) z = columntable(execute(db.conn, """SELECT Model_id FROM Model WHERE Premodel_id=\$1""", [premodel_id])) if !isempty(z) return true => [only(z[:premodel_id])] else z = columntable(execute(db.conn, """SELECT Choice.child FROM Fired JOIN Choice ON Fired.child=Choice.parent WHERE Fired.parent=\$1""", [premodel_id])) return false => collect(z[:child]) end end end """ """ function handle_redo(es::EnumState, premodel_id::Int )::Union{Nothing,Pair{Bool,Vector{Int}}} if premodel_id <= length(es.pk) return nothing end hsh = es.pk[premodel_id] return (hsh ∈ es.models) => [premodel_id] end """ Find all models below a certain cardinality. Sometimes this exploration process generates models *larger* than what we start off with, yet these are eventually condensed to a small size. `extra` controls how much bigger than the initial cardinality we are willing to explore intermediate models. `ignore_seen` skips checking things in the database that were already chased. 
If true, the final list of models may be incomplete, but it could be more efficient if the goal of calling this function is merely to make sure all models are in the database itself. """ function chase_below(db::DBLike, S::Sketch, n::Int; extra::Int=3, filt::Function=(x->true))::Nothing ms = [] for combo in combos_below(length(free_obs(S)), n) ps = mk_pairs(collect(zip(free_obs(S), combo))) if filt(Dict(ps)) premod = create_premodel(S, ps) push!(ms,premod=>init_defined(S, premod)) end end chase_set(db, S, ms, n+extra) end """ Keep processing until none remain v is Vector{Pair{StructACSet,Defined}} """ function chase_set(db::DBLike,S::Sketch, v::Vector, n::Int)::Nothing for (m,d) in v add_premodel(db, S, m, d) end while true todo = get_premodel_ids(db; sketch=S, maxsize=n) if isempty(todo) break else #pmap(mdl -> chase_step_db(db, S, mdl), todo) for mdl in todo # Threads.@threads? chase_step_db(db, S, mdl) end end end end # Equalities ############ """ Note which elements are equal due to relations actually representing functions a₁ -> b₁ a₂ -> b₂ a₁ -> b₃ a₃ -> b₄ Because a₁ is mapped to b₁ and b₃, we deduce b₁=b₃. If the equivalence relation has it such that a₂=a₃, then we'd likewise conclude b₂=b₄ Quotients by the equivalence class at the end """ function fun_eqs!(S::Sketch, J::StructACSet, eqclass::EqClass, def::Defined )::Pair{Bool,Bool} # println([k=>(nparts(J,k),length(v)) for (k,v) in pairs(eqclass)]) cols = [:elabel, [:src, :vlabel], [:tgt, :vlabel]] changed = false ni = non_id(S) for (d, srcobj, tgtobj) in collect(zip([S.schema[x][ni] for x in cols]...)) dsrc, dtgt = add_srctgt(d) srcobj, tgtobj = src(S, d), tgt(S,d) for src_eqset in collect.(eq_sets(eqclass[srcobj]; remove_singles=false)) tgtvals = Set(J[vcat(incident(J, src_eqset, dsrc)...), dtgt]) if length(tgtvals) > 1 if tgtobj ∈ def[1] #println("Fun Eq of $d (src: $src_eqset) merges $tgtobj: $tgtvals") #show(stdout, "text/plain", J) return changed => true else for (i,j) in Iterators.product(tgtvals, tgtvals) if !in_same_set(eqclass[tgtobj], i, j) changed = true union!(eqclass[tgtobj], i, j) end end end end end end merge!(S, J, eqclass) return changed => false end # Path equality ############### """ Use set of path equalities starting from the same vertex to possibly resolve some foreign key values. Each set of equalities induces a rooted diagram ↗B↘ X -> A ↘ C ↗ - We can imagine associated with each vertex there is a set of possible values. - We initialize the diagram with a singleton value at the root (and do this for each object in the root's table). - For each arrow out of a singleton object where we know the value of that FK, we can set the value of the target to that value. - For each arrow INTO a table with some information, we can restrict the poss values of the source by looking at the preimage (this only works if this arrow is TOTALLY defined). 
- Iterate until no information is left to be gained """ function path_eqs!(S::Sketch, J::StructACSet, eqclasses::EqClass, d::Defined)::Pair{Bool, Bool} changed = false for (s, eqd) in zip(S.schema[:vlabel], S.eqs) poss_ = [eq_reps(eqclasses[v]) for v in eqd[:vlabel]] for v in eq_reps(eqclasses[s]) poss = deepcopy(poss_) poss[1], change = [v], Set([1]) while !isempty(change) new_changed, change = prop_path_eq_info!(S, J, eqclasses, d, changed, eqd, poss, change) changed |= new_changed if isnothing(change) return changed => true end # FAILED end end end return changed => false end """Change = tables that have had information added to them""" function prop_path_eq_info!(S, J, eq, d, changed, eqd, poss, change )::Tuple{Bool, Union{Nothing,Set{Int}}} newchange = Set{Int}() for c in change for arr_out_ind in incident(eqd, c, :src) arr_out, t_ind = eqd[arr_out_ind, :elabel], eqd[arr_out_ind, :tgt] ttab = eqd[t_ind, :vlabel] as, at = add_srctgt(arr_out) if poss[c] ⊆ J[as] # we know the image of this set of values tgt_vals = [find_root!(eq[ttab],x) for x in J[vcat(incident(J, poss[c], as)...), at]] if !(poss[t_ind] ⊆ tgt_vals) # we've gained information intersect!(poss[t_ind], tgt_vals) if isempty(poss[t_ind]) return changed, nothing end push!(newchange, t_ind) if length(poss[t_ind]) == 1 # we can set FKs into this table changed |= set_fks!(S, J, d, eqd, poss, t_ind) end end end end for arr_in_ind in incident(eqd, c, :tgt) arr_in, s_ind = eqd[arr_in_ind, :elabel], eqd[arr_in_ind, :src] stab = eqd[s_ind, :vlabel] if arr_in ∈ d[2] && stab ∈ d[1] # only can infer backwards if this is true as, at = add_srctgt(arr_in) src_vals = [find_root!(eq[stab],x) for x in J[vcat(incident(J, poss[c], at)...), as]] if !(poss[s_ind] ⊆ src_vals) # gained information intersect!(poss[s_ind], src_vals) if isempty(poss[s_ind]) return changed, nothing end push!(newchange, s_ind) if length(poss[s_ind]) == 1 # we can set FKs into this table changed |= set_fks!(S, J, d, eqd, poss, s_ind) end end end end end return changed, newchange end """Helper for prop_path_eq_info""" function set_fks!(S, J, d, eqd, poss, t_ind)::Bool changed = false for e_ind in incident(eqd, t_ind, :src) e, tgt_ind = eqd[e_ind, :elabel], eqd[e_ind, :tgt] if length(poss[tgt_ind]) == 1 x, y= only(poss[t_ind]), only(poss[tgt_ind]) if !has_map(J, e, x, y) add_rel!(S, J, d, e, x, y) changed = true end end end for e_ind in incident(eqd, t_ind, :tgt) e, src_ind = eqd[e_ind, :elabel], eqd[e_ind, :src] if length(poss[src_ind]) == 1 x, y = only(poss[src_ind]), only(poss[t_ind]) if !has_map(J, e, x, y) add_rel!(S, J, d, e, x, y) changed = true end end end return changed end # Misc ###### """ 1. Enumerate elements of ℕᵏ for an underlying graph with k nodes. 2. For each of these: (c₁, ..., cₖ) create a term model with that many constants Do the first enumeration by incrementing n_nonzero and finding partitions so that ∑(c₁,...) = n_nonzero In the future, this function will write results to a database that hashes the Sketch as well as the set of constants that generated the model. Also crucial is to decompose Sketch into subparts that can be efficiently solved and have solutions stitched together. 
""" function combos_below(m::Int, n::Int)::Vector{Vector{Int}} res = Set{Vector{Int}}([zeros(Int,m)]) n_const = 0 # total number of constants across all sets for n_const in 1:n for n_nonzero in 1:m # values we'll assign to nodes c_parts = partitions(n_const, n_nonzero) # Which nodes we'll assign them to indices = permutations(1:m,n_nonzero) for c_partition in c_parts for index_assignment in indices v = zeros(Int, m) v[index_assignment] = vcat(c_partition...) push!(res, v) end end end end return sort(collect(res)) end # Branching decision logic ########################## """ Branch priority - this is an art b/c patheqs & cones are two incommensurate ways that a piece of information could be useful. We'll prioritize cones: 1. Defined->Defined AND in the diagram of (co)cones: weigh by # of (co)cones 2. Cocone orphan - order to minimize legs to undefined and then minimize legs 3. Defined->Undefined AND in the diagram of (co)cones 4. Defined -> Defined (no cone, weigh by # of path eqs) 5: Defined->Undefined (no cone, weigh by # of path eqs) 6: Undefined -> Defined (weigh by path eqs) 7: Undefined -> Undefined (weigh by path eqs) """ function priority(S::Sketch, d::Defined, cco::Vector{Symbol} )::Union{Nothing, Symbol} dobs, dhoms = d udobs = setdiff(S.schema[:vlabel], dobs) ls = limit_scores(S, d) hs = (a,b) -> [(h, hom_score(S,ls, h)) for h in hom_set(S,a,b) if h ∉ dhoms] hdd = hs(dobs,dobs) hddl = collect(filter(x->x[2][1]>0, hdd)) if !isempty(hddl) return first(last(sort(hddl, by=x->x[2][1]))) # CASE 1 elseif !isempty(cco) return first(sort(cco, by=cocone_score(S, d))) # CASE 2 end hdu =hs(dobs,udobs) hudl = collect(filter(x -> x[2][1] > 0, hdd)) if !isempty(hudl) return first(last(sort(hudl, by=x->x[2][1]))) # CASE 3 elseif !isempty(hdd) return first(last(sort(hdd, by=x->x[2][2]))) # CASE 4 elseif !isempty(hdu) return first(last(sort(hdu, by=x->x[2][2]))) # CASE 5 end hud = hs(udobs,dobs) if !isempty(hud) return first(last(sort(hud, by=x->x[2][2]))) # CASE 6 end huu = hs(udobs,udobs) if !isempty(huu) return first(last(sort(huu, by=x->x[2][2]))) # CASE 7 end return nothing end """minimize (legs w/ undefined tgts, undefined legs, total # of legs)""" function cocone_score(S::Sketch, d::Defined)::Function function f(c::Symbol)::Tuple{Int,Int,Int} cc = only([cc for cc in S.cocones if cc.apex == c]) srcs = filter(z->z ∉ d[1], [cc.d[x, :vlabel] for x in first.(cc.legs)]) (length(srcs),length(filter(l->l ∉d[2], cc.legs)),length(cc.legs)) end return f end hom_score(S::Sketch, ls::Dict{Symbol, Int}, h::Symbol) = ( limit_score(S,ls,h), eq_score(S,h)) eq_score(S::Sketch, h::Symbol) = sum([count(==(h), d[:elabel]) for d in S.eqs]) """ Evaluate the desirability of knowing more about a hom based on limit definedness. Has precomputed desirability of each limit as an argument. """ limit_score(S::Sketch,ls::Dict{Symbol, Int},h::Symbol) = sum( [ls[c.apex] for c in vcat(S.cones, S.cocones) if h ∈ c.d[:elabel]]) """Give each undefined limit object a score for how undefined it is:""" limit_scores(S::Sketch, d::Defined) = Dict([c.apex=>limit_obj_definedness(d,c) for c in vcat(S.cones,S.cocones)]) """ Evaluate undefinedness of a limit object: (# of undefined objs, # of undefined homs) We should focus on resolving morphisms of almost-defined limit objects, so we give a high score to something with a little bit missing, low score to things with lots missing, and zero to things that are fully defined. 
""" function limit_obj_definedness(d::Defined, c::Cone)::Int dob, dhom = d udob, udhom = setdiff(Set(c.d[:vlabel]), dob), setdiff(Set(c.d[:elabel]), dhom) if isempty(vcat(udob,udhom)) return typemin(Int) else return -(100*length(udob) + length(udhom)) end end end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
10634
module Models export EqClass, NewElem, NewStuff, Modify, mk_pairs, eq_sets, eq_dicts, eq_reps, update_crel!, has_map, create_premodel, crel_to_cset, init_eq, add_rel!, unions!, init_defined, update_defined!, is_total """ Functions for the manipulation of models/premodels """ using ..Sketches using Catlab.CategoricalAlgebra using Catlab.Graphs: refl using DataStructures using AutoHashEquals import Base: merge!, show # Data structures ################# const EqClass = Dict{Symbol, IntDisjointSets} mutable struct NewElem map_in :: DefaultDict{Symbol, Vector{Int}} map_out :: Dict{Symbol, Int} function NewElem() return new(DefaultDict{Symbol, Vector{Int}}(()->Int[]), Dict{Symbol, Int}()) end end function Base.show(io::IO, ne::NewElem) ins = collect(filter(kv->!isempty(kv[2]), pairs(ne.map_in))) outs = collect(pairs(ne.map_out)) print(io, "NE(") if !isempty(ins) print(io, "{") for (k,v) in ins print(io, "$k:$v,") end print(io, "},") end if !isempty(outs) print(io, "{") for (k,v) in outs print(io, "$k:$v,") end print(io, "\b})") end print(io, ")") end mutable struct NewStuff ns::DefaultDict{Symbol, Dict{Any, NewElem}} function NewStuff() return new(DefaultDict{Symbol, Dict{Any, NewElem}}( ()->Dict{Any, NewElem}())) end end function Base.show(io::IO, ns::NewStuff) print(io, "NS("*(isempty(ns.ns) ? " " : "")) for (k,v) in filter(kv->!isempty(kv[2]), pairs(ns.ns)) print(io, "$k:$v,") end print(io, "\b)") end const IType = Union{Nothing, Vector{Pair{Symbol, Int}}, StructACSet} # Modify ######## @auto_hash_equals mutable struct Modify newstuff::NewStuff update::Set{Tuple{Symbol, Int, Int}} end function Modify() return Modify(NewStuff(), Set{Tuple{Symbol, Int, Int}}()) end """Mergy `y` into `x`""" function Base.union!(x::NewStuff, y::NewStuff) for (tab, new_elems) in pairs(y.ns) for (k, ne) in pairs(new_elems) if haskey(x[tab], k) err = """Can't merge $x and $y because of $tab: $key ... or should we merge the new elems???""" ne == x[tab][k] || error(err) else x[tab][k] = ne end end end end # Generic helpers ################# function mk_pairs(v::Vector{Tuple{T1,T2}})::Vector{Pair{T1,T2}} where {T1,T2} [a=>b for (a,b) in v] end # Helper for IntDisjointSets ############################ """ Get the equivalence classes out of an equivalence relation. Pick the lowest value as the canonical representative. """ function eq_sets(eq::IntDisjointSets; remove_singles::Bool=false)::Set{Set{Int}} eqsets = DefaultDict{Int,Set{Int}}(Set{Int}) for i in 1:length(eq) push!(eqsets[find_root!(eq, i)], i) end filt = v -> !(remove_singles && length(v)==1) return Set(filter(filt, collect(values(eqsets)))) end """ Get a function which maps an ACSet part to the minimum element of its eq class """ function eq_dicts(eq::EqClass)::Dict{Symbol, Dict{Int,Int}} res = Dict{Symbol, Dict{Int,Int}}() for (k, v) in pairs(eq) d = Dict{Int, Int}() for es in eq_sets(v) m = minimum(es) for e in es d[e] = m end end res[k] = d end return res end """ Pick root element from each equivalence class Possible alternative: use `minimum` instead """ eq_reps(eq::IntDisjointSets)::Vector{Int} = [find_root!(eq, first(s)) for s in eq_sets(eq; remove_singles=false)] """Union more than two elements, pairwise. Return the pairs used.""" function unions!(eq::IntDisjointSets, vs::Vector{Int})::Vector{Pair{Int,Int}} ps = length(vs) > 1 ? 
[x=>y for (x,y) in zip(vs, vs[2:end])] : Pair{Int,Int}[] for (v1, v2) in ps union!(eq, v1, v2) end return ps end ids_eq(e1::IntDisjointSets, e2::IntDisjointSets)::Bool = eq_sets(e1) == eq_sets(e2) eqclass_eq(e1::EqClass, e2::EqClass)::Bool = Set(keys(e1))==Set(keys(e2)) && all([ids_eq(e1[k],e2[k]) for k in keys(e1)]) """Apply table-indexed equivalence relation to a vector of values (with an equal-length vector of tables)""" function eq_vec(eqclass::EqClass, tabs::Vector{Symbol}, inds::Vector{Int}) length(tabs) == length(inds) || error("eq_vec needs equal length tabs/inds") [find_root!(eqclass[t], i) for (t, i) in zip(tabs, inds)] end """Initialize equivalence classes for a premodel""" function init_eq(S::Sketch, J::StructACSet)::EqClass init_eq([o=>nparts(J, o) for o in S.schema[:vlabel]]) end function init_eq(v::Vector{Pair{Symbol, Int}})::EqClass EqClass([o=>IntDisjointSets(n) for (o, n) in v]) end # Premodel/Model conversion ########################### """ Convert a premodel (C-Rel) to a model C-Set. Elements that are not mapped by a relation are given a target value of 0. If this happens at all, an output bool will be true If the same element is mapped to multiple outputs, an error is thrown. """ function crel_to_cset(S::Sketch, J::StructACSet)::Pair{StructACSet, Bool} res = S.cset() # grph_to_cset(S.name, S.schema) for o in S.schema[:vlabel] add_parts!(res, o, nparts(J, o)) end partial = false for m in elabel(S) msrc, mtgt = add_srctgt(m) length(J[msrc]) == length(Set(J[msrc])) || error("nonfunctional $J") partial |= length(J[msrc]) != nparts(J, src(S, m)) for (domval, codomval) in zip(J[msrc], J[mtgt]) set_subpart!(res, domval, m, codomval) end end return res => partial end """Check if a morphism in a premodel is total, modulo equivalence classes""" function is_total(S::Sketch, J::StructACSet, e::Symbol, eqc::EqClass=nothing) if nparts(J, src(S,e)) == 0 return true end # trivially total if domain is ∅ eqc = isnothing(eqc) ? init_eq(S, J) : eqc sreps = [find_root!(eqc[src(S,e)],x) for x in J[add_srctgt(e)[1]]] missin = setdiff(eq_reps(eqc[src(S,e)]), sreps) !isempty(missin) end """ Create a premodel (C-Rel) from either - a model - a dict of cardinalities for each object (all map tables empty) - nothing (empty C-Rel result) """ function create_premodel(S::Sketch, I::IType=nothing)::StructACSet if !(I isa StructACSet) dic = deepcopy(I) I = S.cset() for (k, v) in (dic === nothing ? 
[] : dic) add_parts!(I, k, v) end end J = S.crel() # grph_to_crel(S.name, S.schema) # Initialize data in J from I for o in S.schema[:vlabel] add_parts!(J, o, nparts(I, o)) end for d in elabel(S) hs, ht = add_srctgt(d) for (i, v) in filter(x->x[2]!=0, collect(enumerate(I[d]))) n = add_part!(J, d) set_subpart!(J, n, hs, i) set_subpart!(J, n, ht, v) end end return J end # Modifying CSets ################## """Use equivalence class data to reduce size of a premodel""" function merge!(S::Sketch, J::StructACSet, eqclasses::EqClass )::Dict{Symbol, Dict{Int,Int}} verbose = false # Initialize a function mapping values to their new (quotiented) value μ = eq_dicts(eqclasses) # Initialize a record of which values are to be deleted delob = DefaultDict{Symbol, Vector{Int}}(Vector{Int}) # Populate `delob` from `eqclasses` for (o, eq) in pairs(eqclasses) eqsets = eq_sets(eq; remove_singles=true) # Minimum element is the representative for vs in map(collect,collect(values(eqsets))) m = minimum(vs) vs_ = [v for v in vs if v!=m] append!(delob[o], collect(vs_)) end end # Replace all instances of a class with its representative in J # could be done in parallel for d in elabel(S) dsrc, dtgt = add_srctgt(d) μsrc, μtgt = μ[src(S, d)], μ[tgt(S, d)] isempty(μsrc) || set_subpart!(J, dsrc, replace(J[dsrc], μsrc...)) isempty(μtgt) || set_subpart!(J, dtgt, replace(J[dtgt], μtgt...)) end # Detect redundant duplicate relation rows for d in elabel(S) # could be done in parallel dsrc, dtgt = add_srctgt(d) seen = Set{Tuple{Int,Int}}() for (i, st) in enumerate(zip(J[dsrc], J[dtgt])) if st ∈ seen push!(delob[d], i) else push!(seen, st) end end end # Remove redundant duplicate relation rows for (o, vs) in collect(delob) isempty(vs) || rem_parts!(J, o, sort(vs)) end return μ end """ Apply the additions updates specified in a NewStuff to a CSet """ function update_crel!(J::StructACSet, nw::NewStuff) for (ob, vs) in pairs(nw.ns) for n_e in values(vs) add_newelem!(J, ob, n_e) end end end function add_newelem!(J::StructACSet, ob::Symbol, n_e::NewElem) new_id = add_part!(J, ob) for (mo, moval) in pairs(n_e.map_out) d = Dict(zip(add_srctgt(mo), [new_id, moval])) add_part!(J, mo; d...) end for (mi, mivals) in pairs(n_e.map_in) for mival in mivals d = Dict(zip(add_srctgt(mi), [mival, new_id])) add_part!(J, mi; d...) end end end function update_crel!(S::Sketch, J::StructACSet, d::Defined, m::Modify) update_crel!(J, m.newstuff) for (k, i, j) in m.update add_rel!(S, J, d, k , i, j) end end function add_rel!(S::Sketch, J::StructACSet, d::Defined, f::Symbol, i::Int, j::Int) add_part!(J, f; Dict(zip(add_srctgt(f), [i,j]))...) update_defined!(S, J, d, f) end # Querying CRel ################ function has_map(J::StructACSet, f::Symbol, x::Int, y::Int)::Bool from_map, to_map = add_srctgt(f) return (x,y) ∈ collect(zip(J[from_map], J[to_map])) end """Check for map, modulo equivalence""" function has_map(S::Sketch, J::StructACSet, f::Symbol, x::Int, y::Int, eq::EqClass)::Bool from_map, to_map = add_srctgt(f) s, t = src(S, f), tgt(S, f) s_eq = i -> find_root!(eq[s], i) t_eq = i -> find_root!(eq[t], i) st = (s_eq(x), t_eq(y)) return st ∈ collect(zip(s_eq.(J[from_map]), t_eq.(J[to_map]))) end # Defined ######### function init_defined(S::Sketch, J::StructACSet)::Defined d = free_obs(S) => Set{Symbol}() update_defined!(S, J, d) return d end """ Return a new Defined object with updates: - A hom that has a value for all elements of its domain - A limit object that has all objects AND homs in its diagram defined (but only right after compute_cone! 
or compute_cocone! is run, so we do not handle that here) Return whether a change was made or not """ function update_defined!(S::Sketch, J::StructACSet, d::Defined, f::Union{Symbol,Nothing}=nothing)::Bool _, dhom = d changed = false for h in setdiff(isnothing(f) ? elabel(S) : [f], dhom) s = src(S,h) if s ∈ d[1] && isempty(setdiff(parts(J, s), J[add_srctgt(h)[1]])) push!(dhom, h) # println("$h is now defined! ") # show(stdout, "text/plain", crel_to_cset(S, J)[1]) changed |= true end end return changed end end # module
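# Illustrative sketch of the equivalence-class bookkeeping used above, with
# plain DataStructures only; `eq_sets(eq)` from this module computes the same
# partition (and `remove_singles=true` would keep only Set([1, 3])).
using DataStructures
let eq = IntDisjointSets(4)
  union!(eq, 1, 3)
  classes = Dict{Int, Set{Int}}()
  for i in 1:4
    push!(get!(classes, find_root!(eq, i), Set{Int}()), i)
  end
  @assert Set(values(classes)) == Set([Set([1, 3]), Set([2]), Set([4])])
end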
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
14274
module Sketches export Sketch, S0, LabeledGraph, Cone, Defined, d0, dual, free_obs, relsize, sizes, sketch_from_json, to_json, add_srctgt, sizes, zero_ob, one_ob, hom_set, hom_in,hom_out,elabel, non_id """Basic data structures for limit sketches""" using Catlab.Present, Catlab.Graphs, Catlab.Theories, Catlab.CategoricalAlgebra using Catlab.Graphs.BasicGraphs: TheoryReflexiveGraph, AbstractGraph using Catlab.CategoricalAlgebra.CSetDataStructures: struct_acset import Catlab.Theories: dual import Catlab.Graphs: src, tgt, topological_sort, refl, inneighbors, outneighbors using CSetAutomorphisms using JSON using AutoHashEquals using DataStructures: DefaultDict import Base: isempty ###################################### const Defined = Pair{Set{Symbol},Set{Symbol}} const d0 = Set{Symbol}() => Set{Symbol}() inneighbors(g::T, v::Int) where {T<:AbstractReflexiveGraph} = subpart(g, incident(g, v, :tgt), :src) outneighbors(g::T, v::Int) where {T<:AbstractReflexiveGraph} = subpart(g, incident(g, v, :src), :tgt) """Edges and vertices labeled by symbols""" @present TheoryLabeledGraph <: TheoryReflexiveGraph begin Label::AttrType vlabel::Attr(V,Label) elabel::Attr(E,Label) end; @acset_type LabeledGraph_( TheoryLabeledGraph, index=[:src,:tgt,:vlabel, :elabel] ) <: AbstractReflexiveGraph const LabeledGraph = LabeledGraph_{Symbol} """Forget about the labels""" function to_graph(lg::LabeledGraph_)::Graph G = Graph(nparts(lg, :V)) s, t= [] add_edges!(G, lg[:src], lg[:tgt]) return G end """Data of a cone (or a cocone)""" @auto_hash_equals struct Cone d::LabeledGraph apex::Symbol legs::Vector{Pair{Int, Symbol}} function Cone(d::LabeledGraph, apex::Symbol, legs::Vector{Pair{Int, Symbol}}) l1, _ = zip(legs...) # l2 might have duplicates, e.g. monomorphism cone length(Set(l1)) == length(legs) || error("nonunique legs $legs") return new(d, apex, legs) end end """ A finite-limit, finite-colimit sketch. Auto-generates data types for C-sets (representing models, i.e. functors from the schema to Set) and C-rels (for representing premodels, which may not satisfy equations/(co)limit constraints) """ @auto_hash_equals struct Sketch name::Symbol schema::LabeledGraph cones::Vector{Cone} cocones::Vector{Cone} eqs::Vector{LabeledGraph} cset::Type cset_pres::Presentation crel::Type crel_pres::Presentation function Sketch(name::Symbol, schema::LabeledGraph, cones::Vector{Cone}, cocones::Vector{Cone}, eqs::V) where V<:AbstractVector namechars = join(vcat(schema[:vlabel], schema[:elabel])) r = Set(refl(schema)) e = "BAD SYMBOL in $schema" all([!occursin(x, namechars) for x in [",", "|"]]) || error(e) function grph_to_cset(name::Symbol, sketch::LabeledGraph )::Pair{Type, Presentation} pres = Presentation(FreeSchema) xobs = [Ob(FreeSchema, s) for s in sketch[:vlabel]] for x in xobs add_generator!(pres, x) end z = zip(sketch[:elabel], sketch[:src], sketch[:tgt]) for (i,(e, src, tgt)) in enumerate(z) if i ∉ r add_generator!(pres, Hom(e, xobs[src], xobs[tgt])) end end expr = struct_acset(name, StructACSet, pres, index=sketch[:elabel]) eval(expr) return eval(name) => pres end function grph_to_crel(name::Symbol,sketch::LabeledGraph )::Pair{Type,Presentation} name_ = Symbol("rel_$name") pres = Presentation(FreeSchema) nv = length(sketch[:vlabel]) alledge = vcat([add_srctgt(e) for (i,e) in enumerate(sketch[:elabel]) if i ∉ r]...) labs = [sketch[:vlabel]..., [e for (i,e) in enumerate(sketch[:elabel]) if i ∉ r]...] 
xobs = [Ob(FreeSchema, s) for s in labs] for x in xobs add_generator!(pres, x) end z = collect(enumerate(zip(sketch[:elabel],sketch[:src], sketch[:tgt]))) for (i,(_,(e, src_, tgt_))) in enumerate(filter(i->i[1]∉r, z)) s, t = add_srctgt(e) add_generator!(pres, Hom(s, xobs[nv+i], xobs[src_])) add_generator!(pres, Hom(t, xobs[nv+i], xobs[tgt_])) end expr = struct_acset(name_, StructACSet, pres, index=alledge) eval(expr) return eval(name_) => pres end function check_eq(p::Vector,q::Vector)::Nothing # Get sequence of edge numbers in the schema graph pe, qe = [[only(incident(schema, edge, :elabel)) for edge in x] for x in [p,q]] ps, qs = [isempty(x) ? nothing : schema[:src][x[1]] for x in [pe, qe]] isempty(qe) || ps == qs || error( "path eq don't share start point \n\t$p ($ps) \n\t$q ($qs)") pen, qen = [isempty(x) ? nothing : schema[:tgt][x[end]] for x in [pe,qe]] isempty(qe) || pen == qen || error( "path eq don't share end point \n\t$p ($pen) \n\t$q ($qen)") !isempty(qe) || ps == pen|| error( "path eq has self loop but p doesn't have same start/end $p \n$q") all([schema[:tgt][p1]==schema[:src][p2] for (p1, p2) in zip(pe, pe[2:end])]) || error( "head/tail mismatch in p $p \n$q") all([schema[:tgt][q1]==schema[:src][q2] for (q1, q2) in zip(qe, qe[2:end])]) || error( "head/tail mismatch in q $p \n$q") return nothing end function check_cone(c::Cone)::Nothing vert = only(incident(schema, c.apex, :vlabel)) for (v, l) in c.legs edge = only(incident(schema, l, :elabel)) schema[:src][edge] == vert || error("Leg does not come from apex $c") schema[:vlabel][schema[:tgt][edge]] == c.d[:vlabel][v] || error( "Leg $l -> $v does not go to correct vertex $c") is_homomorphic(c.d, schema) || error( "Cone diagram does not map into schema $c") end end function check_cocone(c::Cone)::Nothing vert = only(incident(schema, c.apex, :vlabel)) for (v, l) in c.legs edge = only(incident(schema, l, :elabel)) schema[:tgt][edge] == vert || error( "Leg $l does not go to apex $(c.apex)") schema[:vlabel][schema[:src][edge]] == c.d[:vlabel][v] || error( "Leg $l -> $v does not go to correct vertex $c") is_homomorphic(c.d, schema) || error( "Cone diagram does not map into schema $c") end end if !(isempty(eqs) || first(eqs) isa LabeledGraph) [check_eq(p,q) for (p, q) in eqs] eqds = eqs_to_diagram(schema, eqs) else eqds = isempty(eqs) ? LabeledGraph[] : eqs end [check_cone(c) for c in cones] [check_cocone(c) for c in cocones] cset_type, cset_pres = grph_to_cset(name, schema) crel_type, crel_pres = grph_to_crel(name, schema) return new(name, schema, cones, cocones, eqds, cset_type, cset_pres, crel_type, crel_pres) end end const S0=Sketch(:dummy, LabeledGraph(),Cone[],Cone[],[]) struct SketchMorphism d::Sketch cd::Sketch h::ACSetTransformation # Graph transformation of schemas end """Dual sketch. Optionally rename obs/morphisms and the sketch itself""" function dual(s::Sketch, n::Symbol=Symbol(), obs::Vector{Pair{Symbol, Symbol}}=Pair{Symbol, Symbol}[]) d = Dict(obs) eqsub = ps -> reverse([get(d, p, p) for p in ps]) dname = isempty(string(n)) ? Symbol("$(s.name)"*"_dual") : n dschema = dualgraph(s.schema, d) dcones = [dual(c, d) for c in s.cocones] dccones = [dual(c,d) for c in s.cones] eqs = vcat(diagram_to_eqs.(s.eqs)...) 
deqs = [[eqsub(p) for p in ps] for ps in eqs] Sketch(dname, dschema, dcones, dccones,deqs) end dual(c::Cone, obs::Dict{Symbol, Symbol}) = Cone(dualgraph(c.d, obs), get(obs,c.apex,c.apex), [(i => get(obs, x, x)) for (i, x) in c.legs]) function dualgraph(lg::LabeledGraph, obd::Dict{Symbol, Symbol}) g = deepcopy(lg) set_subpart!(g, :src, lg[:tgt]) set_subpart!(g, :tgt, lg[:src]) set_subpart!(g, :vlabel, replace(z->get(obd, z, z), g[:vlabel])) set_subpart!(g, :elabel, replace(z->get(obd, z, z), g[:elabel])) return g end src(S::Sketch, e::Symbol) = S.schema[:vlabel][S.schema[:src][ only(incident(S.schema, e, :elabel))]] tgt(S::Sketch, e::Symbol) = S.schema[:vlabel][S.schema[:tgt][ only(incident(S.schema, e, :elabel))]] cone_to_dict(c::Cone) = Dict([ "d"=>generate_json_acset(c.d), "apex"=>string(c.apex),"legs"=>c.legs]) dict_to_cone(d::Dict)::Cone = Cone( parse_json_acset(LabeledGraph,d["d"]), Symbol(d["apex"]), Pair{Int,Symbol}[parse(Int, k)=>Symbol(v) for (k, v) in map(only, d["legs"])]) """TO DO: add cone and eq info to the hash...prob requires CSet for Sketch""" Base.hash(S::Sketch) = call_nauty(to_graph(S.schema)) to_json(S::Sketch) = JSON.json(Dict([ :name=>S.name, :schema=>generate_json_acset(S.schema), :cones => [cone_to_dict(c) for c in S.cones], :cocones => [cone_to_dict(c) for c in S.cocones], :eqs => generate_json_acset.(S.eqs)])) function sketch_from_json(s::String)::Sketch p = JSON.parse(s) Sketch(Symbol(p["name"]), parse_json_acset(LabeledGraph, p["schema"]), [dict_to_cone(d) for d in p["cones"]], [dict_to_cone(d) for d in p["cocones"]], [parse_json_acset(LabeledGraph,e) for e in p["eqs"]]) end add_srctgt(x::Symbol) = Symbol("src_$(x)") => Symbol("tgt_$(x)") """Objects that are not the apex of some (co)cone""" function free_obs(S::Sketch)::Set{Symbol} setdiff(Set(S.schema[:vlabel]), [c.apex for c in vcat(S.cones, S.cocones)]) end zero_ob(S::Sketch) = [c.apex for c in S.cocones if nv(c.d) == 0] one_ob(S::Sketch) = [c.apex for c in S.cones if nv(c.d) == 0] """List of arrows between two sets of vertices""" function hom_set(S::Sketch, d_symbs, cd_symbs)::Vector{Symbol} symbs = [d_symbs, cd_symbs] d_i, cd_i = [vcat(incident(S.schema, x, :vlabel)...) for x in symbs] e_i = setdiff( (vcat(incident(S.schema, d_i, :src)...) ∩ vcat(incident(S.schema, cd_i, :tgt)...)), refl(S.schema) ) return S.schema[e_i, :elabel] end hom_in(S::Sketch, t::Symbol) = hom_set(S, S.schema[:vlabel], [t]) hom_out(S::Sketch, t::Symbol) = hom_set(S, [t], S.schema[:vlabel]) const DD = DefaultDict{Pair{Int,Int},Set{Vector{Int}}} """Enumerate all paths of an acyclic graph, indexed by src+tgt""" function enumerate_paths(G::HasGraph; sorted::Union{AbstractVector{Int},Nothing}=nothing )::DD sorted = isnothing(sorted) ? topological_sort(G) : sorted Path = Vector{Int} paths = [Set{Path}() for _ in 1:nv(G)] # paths that start on a particular V for v in reverse(topological_sort(G)) push!(paths[v], Int[]) # add length 0 paths for e in incident(G, v, :src) push!(paths[v], [e]) # add length 1 paths for p in paths[G[e, :tgt]] # add length >1 paths push!(paths[v], vcat([e], p)) end end end # Restructure `paths` into a data structure indexed by start AND end V allpaths = DefaultDict{Pair{Int,Int},Set{Path}}(()->Set{Path}()) for (s, ps) in enumerate(paths) for p in ps push!(allpaths[s => isempty(p) ? 
s : G[p[end],:tgt]], p) end end return allpaths end """Add path to commutative diagram without repeating information""" function add_path!(schema::LabeledGraph, lg::LabeledGraph, p::Vector{Symbol}, all_p::Dict{Vector{Symbol}, Int}, eqp::Union{Nothing, Vector{Symbol}}=nothing, ) #all_p = isnothing(all_p) ? union(values(enumerate_paths(lg)...)) : all_p s = only(incident(schema, first(p), :elabel)) for i in 1:length(p) if !haskey(all_p, p[1:i]) e = only(incident(schema, p[i], :elabel)) t = schema[e, [:tgt,:vlabel]] if isnothing(eqp) || i < length(p) new_v = add_part!(lg, :V; vlabel=t) else new_v = all_p[eqp] end s = i == 1 ? 1 : all_p[p[1:i-1]] add_part!(lg, :E; src=s, tgt=new_v, elabel=p[i]) all_p[p[1:i]] = new_v end end end """ Get per-object diagrams encoding all commutative diagrams which start at that point, using the information of pairwise equations eqs:: Vector{Tuple{Symbol, Vector{Symbol}, Vector{Symbol}}} """ function eqs_to_diagram(schema::LabeledGraph, eqs)::Vector{LabeledGraph} lgs = [LabeledGraph() for _ in 1:nv(schema)] all_ps = [Dict{Vector{Symbol}, Int}(Symbol[]=>1) for _ in 1:nv(schema)] for (i, root) in enumerate(schema[:vlabel]) add_part!(lgs[i], :V; vlabel=root) end for (p1, p2) in eqs # TODO: support more than 2 eqs at once src_i = schema[only(incident(schema, first(p1), [:elabel])), :src] if haskey(all_ps[src_i], p2) add_path!(schema, lgs[src_i], p1, all_ps[src_i], Vector{Symbol}(p2)) else add_path!(schema, lgs[src_i], p1, all_ps[src_i]) add_path!(schema, lgs[src_i], p2, all_ps[src_i], p1) end end return lgs end function diagram_to_eqs(g::LabeledGraph) map(filter(x->length(x)>1, collect(values(enumerate_paths(g))))) do ps [g[p,:elabel] for p in ps] end end """ignores the identity morphisms""" elabel(S::Sketch) = elabel(S.schema) elabel(C::Cone) = elabel(C.d) elabel(G::LabeledGraph) = G[non_id(G), :elabel] non_id(S::Sketch) = non_id(S.schema) non_id(G::LabeledGraph) = setdiff(edges(G), G[:refl]) |> collect |> sort end # module # """ # Query that returns all instances of the base pattern. External variables # are labeled by the legs of the cone. # """ # function cone_query(c::Cone)::StructACSet # vars = [Symbol("x$i") for i in nparts(c.d, :V)] # typs = ["$x(_id=x$i)" for (i, x) in enumerate(c.d[:vlabel])] # bodstr = vcat(["begin"], typs) # for (e, s, t) in zip(c.d[:elabel], c.d[:src], c.d[:tgt]) # push!(bodstr, "$e(src_$e=x$s, tgt_$e=x$t)") # end # push!(bodstr, "end") # exstr = "($(join(["$(v)_$i=x$k" for vs in values(vars) # for (i, (k,v)) in enumerate(c.legs)],",") ))" # ctxstr = "($(join(vcat(["x$i::$x" # for (i, x) in enumerate(c.d[:vlabel])],),",")))" # ex = Meta.parse(exstr) # ctx = Meta.parse(ctxstr) # hed = Expr(:where, ex, ctx) # bod = Meta.parse(join(bodstr, "\n")) # if false # println("ex $exstr\n ctx $ctxstr\n bod $(join(bodstr, "\n"))") # end # res = parse_relation_diagram(hed, bod) # return res # end
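# Illustrative usage sketch for `enumerate_paths` (internal, hence the qualified
# call), assuming the module above has been evaluated and Catlab's basic graph
# API is available. Paths are vectors of edge ids indexed by (src => tgt) pairs.
using Catlab.Graphs: Graph, add_edges!
let g = Graph(3)
  add_edges!(g, [1, 2], [2, 3])                # edge 1: 1→2, edge 2: 2→3
  ps = Sketches.enumerate_paths(g)
  @assert ps[1 => 3] == Set([[1, 2]])          # the unique composite path 1→2→3
  @assert ps[2 => 2] == Set([Int[]])           # every vertex carries an empty path
end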
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
342
module TestDB

# using Revise
using Test
using CombinatorialEnumeration
using Catlab.CategoricalAlgebra

include(joinpath(@__DIR__, "TestSketch.jl"));

J = create_premodel(S);
es = EnumState()

@test add_premodel(es, S, J) == 1
@test es[1] == J
@test add_premodel(es, S, J) == 1 # does not insert again

# TODO test other stuff

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
2827
module TestModEnum

# using Revise
using Test
using CombinatorialEnumeration
using CombinatorialEnumeration.ModEnum: combos_below

include(joinpath(@__DIR__, "TestSketch.jl"));

@test length(combos_below(2, 3)) == 10

# model enumeration where |A| = |B| = 1
I = @acset S.cset begin A=1;B=1;I=1;a=1 end
es = init_premodel(S, I, [:A,:B]);
chase_db(S, es)
term = @acset(S.cset, begin A=1;B=1;C=1;E=1;I=1;f=1;g=1;c=1;e=1;a=1;b=1 end)
@test test_models(es, S, [term])

# model enumeration where |A| = 1, |B| = 2
I = @acset S.cset begin A=1;B=2;I=1;a=1 end;
es = init_premodel(S, I, [:A,:B]);
chase_db(S, es);
expected = [
  # the f&g can point to the same element
  @acset(S.cset, begin A=1;B=2;E=1;C=2;I=1;f=1;g=1;c=[1,2];a=1;b=1;e=1 end),
  # or they can point to different elements
  @acset(S.cset, begin A=1;B=2;C=1;I=1;f=1;g=2;c=1;a=1;b=1 end)
]
@test test_models(es, S, expected)

# model enumeration where |A| = 2, |B| = 1
I = @acset S.cset begin A=2;B=1 end;
es = init_premodel(S, I, [:A,:B]);
chase_db(S, es);
@test test_models(es, S, [@acset(S.cset, begin
  A=2;B=1;C=1;E=2;I=1; # both A equalized
  f=1;g=1;c=1;e=[1,2];a=1;b=1 end)])

# model enumeration where |A| = 2, |B| = 2
I = @acset S.cset begin A=2;B=2 end;
es = init_premodel(S, I, [:A,:B]);
chase_db(S, es)
expected = [
  # f&g are both id
  @acset(S.cset, begin A=2;B=2;E=2;C=2;I=1;
    f=[1,2];g=[1,2];c=[1,2];a=1;b=1;e=[1,2] end),
  # f&g are both const, picking out diff B elems
  @acset(S.cset, begin A=2;B=2;C=1;I=1;f=1;g=2;c=1;a=1;b=1 end),
  # f&g are not const and different for both A's
  @acset(S.cset, begin A=2;B=2;C=1;I=1;f=[2,1];g=[1,2];c=1;a=1;b=2 end),
  # f&g both const and point to same element
  @acset(S.cset, begin A=2;B=2;E=2;C=2;I=1;f=1;g=1;c=[1,2];e=[1,2];a=1;b=1 end),
  # f is const, g differs for the A's, so one of the A's is equalized.
  # "a" points to the element that is equalized.
  @acset(S.cset, begin A=2;B=2;E=1;C=1;I=1;f=1;g=[1,2];c=1;a=1;b=1;e=1 end), # 2
  # f is const, g differs for the A's, so one of the A's is equalized.
  # "a" points to the element that is not equalized.
  @acset(S.cset, begin A=2;B=2;E=1;C=1;I=1;f=1;g=[2,1];c=1;a=1;b=1;e=2 end), # 5
  # g is const, f differs for the A's, so one of the A's is equalized.
  # "a" points to the element that is equalized.
  @acset(S.cset, begin A=2;B=2;E=1;C=1;I=1;f=[1,2];g=1;c=1;a=1;b=1;e=1 end), # 5
  # g is const, f differs for the A's, so one of the A's is equalized.
  # "a" points to the element that is not equalized.
  @acset(S.cset, begin A=2;B=2;E=1;C=1;I=1;f=[2,1];g=1;c=1;a=1;b=2;e=2 end), # 5
]
@test test_models(es, S, expected)

# Merge via functionality
I = deepcopy(term)
add_part!(I, :E; e=1)
es = init_premodel(S, I, [:A,:B]);
chase_db(S, es);
@test test_models(es, S, [term])

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
2650
module TestModels

# using Revise
using Test
using CombinatorialEnumeration
using Catlab.CategoricalAlgebra
using CombinatorialEnumeration.Models: test_premodel

include(joinpath(@__DIR__, "TestSketch.jl"));

# create_premodel
J = create_premodel(S)
@test all(x->nparts(J.model, x) == (x == :I ? 1 : 0), S.schema[:vlabel])

# crel_to_cset for partial model
emp = @acset S.cset begin I=1; end
@test crel_to_cset(S, J) == (emp => true)

# Changes
#########
J = create_premodel(S)
newvals = @acset(S.crel, begin
  A=1;B=1;I=1;f=1;a=1;b=1;
  src_f=1;tgt_f=1;src_a=1;tgt_a=1;src_b=1;tgt_b=1 end)
ad = Addition(S, J, homomorphism(J.model, newvals), id(J.model))
@test (exec_change(S, J.model, ad) |> codom) == newvals

J = test_premodel(S, @acset(S.cset, begin A=5;B=5;f=[1,2,3,4,5] end))
J.aux.frozen = Set{Symbol}() => Set{Symbol}()
md = Dict([:A=>[[2,3],[4,5]], :B=>[[1,5]]])
J_ = deepcopy(J)
m = Merge(S, J_, md); # Model's eq classes are modified by constructing Merge
@test J_.aux.eqs[:A].parents == [1,2,2,4,4]
@test J_.aux.eqs[:B].parents == [1,2,3,4,1]
result = codom(exec_change(S, J_.model, m))
@test nparts(result, :A) == 3
@test nparts(result, :B) == 4

J = test_premodel(S, @acset(S.cset, begin A=1;B=1;f=1 end))
@test nparts(rem_dup_relations(S, J.model)|>codom, :f) == 1

# Updating the addition of f->[b₁,b₂] with a merging of [a₁,a₂]
newvals = @acset(S.crel, begin
  A=2;B=2;I=1;f=2; Cone_I=1; Cone_I_apex=1
  src_f=[1,2];tgt_f=[1,2] end)
J = test_premodel(S, @acset(S.cset, begin A=2;I=1;Cone_I=1;Cone_I_apex=1;end))
J.aux.frozen = Set{Symbol}() => Set{Symbol}()
J_orig = deepcopy(J)
ad = Addition(S, J, homomorphism(J.model, newvals; monic=true), id(J.model))
m = Merge(S, deepcopy(J), Dict(:A=>[[1,2]]))
J_update = exec_change(S, J.model, m);
J.model = codom(J_update)
ad2 = update_change(S, J, J_update, ad);
@test nparts(apex(ad), :A) == 2
@test nparts(apex(ad2), :A) == 1

# Merging overlapping additions
J = test_premodel(S, @acset(S.cset, begin A=2;B=2 end))
a1 = add_fk(S, J, :f, 1, 1)
a2 = add_fk(S, J, :g, 1, 2)
a3 = add_fk(S, J, :f, 1, 2)
a = merge(S, J, a1, a2)
@test codom(a.l) == @acset S.crel begin
  A=1;B=2;f=1;g=1;src_f=1;src_g=1;tgt_f=1;tgt_g=2 end
@test apex(a) == @acset S.crel begin A=1;B=2; end
a = merge(S, J, [a1,a2,a3])

# Merging overlapping merges
J = create_premodel(S, Dict(:A=>5, :B=>5))
md = Dict([:A=>[[2,3,5]], :B=>[[1,5]]])
J_ = deepcopy(J)
m = Merge(S, J_, md);

J_ = deepcopy(J)
md1 = Dict([:A=>[[2,3]]])
m1 = Merge(S, J_, md1);

J_ = deepcopy(J)
md2 = Dict([:A=>[[3,5]]])
m2 = Merge(S, J_, md2);

J_ = deepcopy(J)
md3 = Dict([:B=>[[1,5]]])
m3 = Merge(S, J_, md3);

res_m = merge(S, J_, [m1,m2,m3])

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
6103
module TestPropagate

# using Revise
using Test
using CombinatorialEnumeration
using DataStructures
using CombinatorialEnumeration.Models: EQ, test_premodel

# Sketches
##########
# default test case
include(joinpath(@__DIR__, "TestSketch.jl"));

# Test model
#-----------
J0model = @acset S.cset begin
  A=3;B=3;E=3;C=3;I=1;
  a=1;b=1;f=[1,2,3];g=[1,2,3];c=[1,2,3];e=[1,2,3];
end
J0 = test_premodel(S, J0model, freeze=[:B])

# Test function propagation
#--------------------------

# Adding a (disjoint) aₙ -f,g-> bₙ
# This should add a new equalizer cone
R = @acset(S.crel, begin A=1;B=1;f=1;g=1;src_f=1;tgt_f=1;src_g=1;tgt_g=1 end)
ad = Addition(S, J0, R) :: Change
J0_ = deepcopy(J0);
m = exec_change(S, J0.model, ad)
J0_.model = codom(m)
mch, ach = propagate!(S, J0_, ad, m)
@test is_no_op(mch)
@test codom(ach.l) == @acset S.crel begin A=1;E=1;e=2;src_e=1;tgt_e=1 end

# merge cone apexes
me = Merge(S, deepcopy(J0), Dict([:E=>[[1,2]]]))
J0_ = deepcopy(J0);
m = exec_change(S, J0.model, me)
J0_.model = codom(m)
mch, ach = propagate!(S, J0_, me, m)
@test is_no_op(ach)
@test codom(mch.l) == @acset S.crel begin A=1 end
@test apex(mch) == @acset S.crel begin A=2 end

# merge A1 and A2: should induce merge of B1 and B2 as well as E1 and E2
J0 = test_premodel(S, J0model) # nothing frozen
me = Merge(S, deepcopy(J0), Dict([:A=>[[1,2]]]))
J0_ = deepcopy(J0)
m = exec_change(S, J0.model, me)
J0_.model = codom(m)
mch, ach = propagate!(S, J0_, me, m)
@test is_no_op(ach)
@test all(v->nparts(codom(mch.l), v)==1 && nparts(apex(mch), v)==2, [:B,:E])

# Test path eq propagation
#-------------------------
Jpth_model = @acset S.crel begin A=3; B=3; I=1 end
Jpth = test_premodel(S, Jpth_model, freeze=[:A,:B])
adpth_ia = add_fk(S, Jpth, :a, 1, 1)
Jp = deepcopy(Jpth)
m = exec_change(S, Jpth.model, adpth_ia)
Jp.model = codom(m)
mch, ach = propagate!(S, Jp, adpth_ia, m)
@test is_no_op(mch) # path_eqs are changed, but nothing we can do yet
@test is_no_op(ach) # path_eqs are changed, but nothing we can do yet
@test Jpth.aux.path_eqs[:I] == [[[1],[1,2,3],[1,2,3]]] # before
@test Jp.aux.path_eqs[:I] == [[[1],[1,2,3],[1]]] # after

ads = merge(S, Jp, [add_fk(S, Jp, :f, i, j) for (i,j) in [1=>2, 2=>3, 3=>1]])
Jp_ = deepcopy(Jp)
m = exec_change(S, Jp.model, ads)
Jp_.model = codom(m)
mch, ach = propagate!(S, Jp_, ads, m)
@test is_no_op(mch)
# we infer that I->B must be 1.
expect = @acset S.crel begin
  I=1;A=3;B=3;f=3;a=1;b=1;
  src_a=1;tgt_a=1;src_b=1;tgt_b=1;
  src_f=[1,2,3]; tgt_f=[1,2,3] end
@test is_isomorphic(codom(exec_change(S, Jp_.model, ach)), expect)

# Test backwards inference given a frozen "f"
Jpth = test_premodel(S, Jpth_model, freeze=[:A,:B])
adpth_ib = add_fk(S, Jpth, :b, 1, 1)
Jp = deepcopy(Jpth)
m = exec_change(S, Jp.model, adpth_ib)
Jp.model = codom(m)
mch, ach = propagate!(S, Jp, adpth_ib, m)
ads = merge(S, Jp, [add_fk(S, Jp, :f, i, j) for (i,j) in [1=>2, 2=>3, 3=>1]])
Jp_ = deepcopy(Jp)
m = exec_change(S, Jp.model, ads)
Jp_.model = codom(m)
mch, ach = propagate!(S, Jp_, ads, m)
@test is_no_op(mch)
@test is_isomorphic(codom(exec_change(S, Jp_.model, ach)), expect)

# Pullback tests
################

"""
Pullback sketch (to test limits)

        π₁
    D - - > B
    |       |
 π₂ |       | g
    v       v
    A ----> C
        f
"""
pbschema = @acset LabeledGraph begin
  V=4; E=4; vlabel=[:A,:B,:C,:D]; elabel=[:f,:g,:π₁,:π₂];
  src=[1,2,4,4]; tgt=[3,3,1,2]
end
csp = @acset LabeledGraph begin
  V=3; E=2; vlabel=[:A,:B,:C]; elabel=[:f,:g]; src=[1,2]; tgt=[3,3]
end
PB = Sketch(:PB, pbschema; cones=[Cone(csp, :D, [1=>:π₁, 2=>:π₂])])

# Initial data
#-------------
PBmodel = @acset PB.cset begin
  A=3;B=3;C=3;D=3;f=[1,1,3];g=[1,2,3];π₁=[1,2,3]; π₂=[1,1,3]
end
PB0 = test_premodel(PB, PBmodel)

# Changes
#--------
# Merging pb diagram elems
PB0_ = deepcopy(PB0);
me_PBC = Merge(PB, PB0_, Dict([:C=>[[2,3]]]))
# Merging pb apex elems
PB0_ = deepcopy(PB0);
me_PBD = Merge(PB, PB0_, Dict([:D=>[[2,3]]]))

# Pushout tests
###############

# pushout sketch (to test colimits)
PO = dual(PB)
POmodel = @acset PO.cset begin
  A=3; B=3; C=3; D=3; f=[1,1,3]; g=[1,2,3]; π₁=[1,2,3]; π₂=[1,1,3]
end
PO0 = test_premodel(PO, POmodel)

# merge two elements in the diagram leg
#--------------------------------------
PO0_ = deepcopy(PO0);
me_POA = Merge(PO, PO0_, Dict([:A=>[[2,3]]]))
PO0_ = deepcopy(PO0);
m = exec_change(PO, PO0.model, me_POA)
PO0_.model = codom(m)
mch, ach = propagate!(PO, PO0_, me_POA, m)
@test is_no_op(ach)
# there are two changes that result. We quotient D via functionality of π₁. We
# also quotient D because π₁ is a cocone leg and there are multiple apex
# elements that are pointed to by the same connected component
@test nparts(apex(mch), :D) == 2
@test num_groups(PO0_.aux.cocones[1][1]) == 2

# merge two elements in the diagram apex
#---------------------------------------
PO0_ = deepcopy(PO0);
me_POC = Merge(PO, PO0_, Dict([:C=>[[2,3]]]))
PO0_ = deepcopy(PO0);
m = exec_change(PO, PO0.model, me_POC)
PO0_.model = codom(m)
mch, ach = propagate!(PO, PO0_, me_POC, m)
@test num_groups(PO0_.aux.cocones[1][1]) == 2
@test nparts(apex(mch), :D) == 2
@test nparts(apex(mch), :A) == 2
@test nparts(apex(mch), :B) == 2

# Add a FK which makes it impossible for two groups to be connected
#------------------------------------------------------------------
PO1model = @acset PO.cset begin A=1;B=2;C=1;D=1;π₁=[1];π₂=[1,1] end
PO1 = test_premodel(PO, PO1model, freeze=[:A,:B,:C])
ad = add_fk(PO, PO1, :f, 1, 1)
PO1_ = deepcopy(PO1)
m = exec_change(PO, PO1.model, ad)
PO1_.model = codom(m)
@test_throws(ModelException, propagate!(PO, PO1_, ad, m))

# Add a FK which leaves it possible for two groups to be connected
#-----------------------------------------------------------------
ad_extraC = deepcopy(ad)
adL = deepcopy(codom(ad_extraC.l));
add_part!(adL, :C)
ad_extraC = Addition(PO, PO1, homomorphism(apex(ad), adL; monic=true), ad.r)
PO1_ = deepcopy(PO1)
m = exec_change(PO, PO1.model, ad_extraC)
PO1_.model = codom(m)
mch, ach = propagate!(PO, PO1_, ad_extraC, m)
@test is_no_op(mch)
@test !is_no_op(ach)
# @test nparts(codom(ach.l), :f) == 2 # different answer when eval'd in REPL???

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
546
module TestSketches

# using Revise
using Test
using CombinatorialEnumeration
using Catlab.CategoricalAlgebra

include(joinpath(@__DIR__, "TestSketch.jl"));

@test elabel(S.cones[1]) == [:f,:g]
@test src(S, :f) == :A
@test tgt(S, :f) == :B
@test hom_set(S, :A, :B) == [:f,:g]
@test hom_set(S, [:I,:Z], [:A]) == [:a,:z]
@test hom_in(S, :A) == [:e,:z,:a]
@test isempty(hom_set(S, :A, :A))

@test dual(dual(S), :test) == S

@test sketch_from_json(to_json(S)) == S

@test sizes(S, S.crel |> terminal |> apex) == "A: 1, B: 1, C: 1, E: 1, Z: 1, I: 1"

end # module
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
991
# using Revise
using CombinatorialEnumeration
using Catlab.CategoricalAlgebra

"""
Example sketch with a path equation, equalizer, coequalizer, 0, and 1 object.

   e    f&g    c
E ↪ A    ⇉   B ↠ C
    z↑   ↖a  ↑b
    Z        1
"""

schema = @acset LabeledGraph begin
  V=6; E=6+7; refl=1:6;
  vlabel=[:A,:B,:C,:E,:Z,:I]
  elabel=[:idA,:idB,:idC,:idE,:idZ,:idI,:f,:g,:c,:e,:z,:a,:b]
  src   =[1,   2,   3,   4,   5,   6,   1, 1, 2, 4, 5, 6, 6]
  tgt   =[1,   2,   3,   4,   5,   6,   2, 2, 3, 1, 1, 1, 2]
end

eqs = [[[:b], [:a,:f]]]

cone_g = @acset LabeledGraph begin
  V=3; E=3+2; refl=1:3;
  vlabel=[:A,:A,:B]; elabel=[:idA,:idA,:idB,:f,:g];
  src=[1,2,3,1,2]; tgt=[1,2,3,3,3]
end
cones = [Cone(cone_g, :E, [1=>:e, 2=>:e]), Cone(:I)]

cocone_g = @acset LabeledGraph begin
  V=3; E=3+2; refl=1:3;
  vlabel=[:A,:B,:B]; elabel=[:idA,:idB,:idB,:f,:g];
  src=[1,2,3,1,1]; tgt=[1,2,3,2,3]
end
cocones = [Cone(cocone_g, :C, [2=>:c, 3=>:c]), Cone(:Z)]

S = Sketch(:test, schema; cones=cones, cocones=cocones, eqs=eqs)
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
340
using Pkg, Coverage

bashit(str) = run(`bash -c "$str"`)

bashit("""find . -name '*.jl' -print0 | xargs -0 sed -i "" "s/^using Revise/# using Revise/g" """)

Pkg.test("CombinatorialEnumeration"; coverage=true)
coverage = process_folder()
open("lcov.info", "w") do io
  LCOV.write(io, coverage)
end;
bashit("find . -name *.cov -delete")
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
code
542
using Test
using CombinatorialEnumeration

@testset "Sketches" begin include("Sketches.jl") end
@testset "Models" begin include("Models.jl") end
@testset "DB" begin include("DB.jl") end
@testset "Propagate" begin include("Propagate.jl") end
@testset "ModEnum" begin include("ModEnum.jl") end

for ex in filter(f->f[end-2:end]==".jl", readdir("$(pkgdir(CombinatorialEnumeration))/data"))
  @testset "$ex" begin
    println("$ex")
    include(joinpath(@__DIR__, "$(pkgdir(CombinatorialEnumeration))/data/$ex")).runtests()
  end
end
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
docs
1308
# ![CombinatorialEnumeration.jl](docs/src/assets/logo.png) CombinatorialEnumeration.jl

[![Documentation](https://github.com/kris-brown/CombinatorialEnumeration.jl/workflows/Documentation/badge.svg)](https://kris-brown.github.io/CombinatorialEnumeration.jl/dev/)
![Tests](https://github.com/kris-brown/CombinatorialEnumeration.jl/workflows/Tests/badge.svg)

This package implements a constrained search algorithm, with constraints specified in the language of [sketches](https://www.math.mcgill.ca/barr/papers/sketch.pdf) / category theory. Formally, given a finite (co)limit sketch, we enumerate its models _up to isomorphism_.

See more in the [documentation](https://kris-brown.github.io/CombinatorialEnumeration.jl/dev/) (also found [here](https://github.com/kris-brown/CombinatorialEnumeration.jl/blob/main/docs/src/index.md), if GitHub Pages isn't working); some examples are in the top-level `data/` directory.

## Status

This is very experimental code, so there may be frequent breaking changes. There is great opportunity for massive speed-ups: only the most basic implementation needed to get something running has been written so far, but it is structured in a modular way (e.g. cone constraints and cocone constraints are enforced separately) so that bottlenecks can be identified and improved piecemeal.
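## Example

A minimal usage sketch, adapted from the package's own test suite; it assumes a `Sketch` named `S` (for instance the equalizer/coequalizer example defined in `test/TestSketch.jl`), so treat it as illustrative rather than authoritative API documentation:

```julia
using CombinatorialEnumeration
using Catlab.CategoricalAlgebra

# `S` is a Sketch (see test/TestSketch.jl). Seed the search by fixing the
# cardinalities of some of its objects...
I  = @acset S.cset begin A=1; B=1; I=1; a=1 end
es = init_premodel(S, I, [:A, :B])

# ...then run the enumeration; `es` collects the (pre)models consistent with
# the sketch and the chosen cardinalities, up to isomorphism.
chase_db(S, es)
```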
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
docs
342
These files contain examples of sketches. They share some common structure and use the same names for these common structures (a schematic of this shape follows the list):

- `S`: the sketch
- `runtests()`: a function which throws an error if CombinatorialEnumeration is not giving expected results
- `to_model`/`from_model`: interconvert between models and some Julia data structure
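Every name in the snippet below other than `S`, `runtests`, `to_model`, and `from_model` is an illustrative placeholder, not taken from any file in this directory; the trivial one-object sketch only exists to keep the snippet self-contained, and it assumes `Sketch` provides defaults for cones, cocones, and equations:

```julia
module MyExampleSketch

using Test
using CombinatorialEnumeration
using Catlab.CategoricalAlgebra

# 1. The sketch itself (here: a single object X with no extra constraints).
schema = @acset LabeledGraph begin
  V=1; E=1; refl=[1]
  vlabel=[:X]; elabel=[:idX]; src=[1]; tgt=[1]
end
S = Sketch(:MyExampleSketch, schema) # assumes default kwargs for cones/cocones/eqs

# 2. Interconversion between models and a friendlier Julia data structure.
#    A model of this trivial sketch is just a finite set, i.e. a cardinality.
from_model(m) = nparts(m, :X)
function to_model(n::Int)
  m = S.cset()
  add_parts!(m, :X, n)
  return m
end

# 3. A check that CombinatorialEnumeration behaves as expected on this sketch.
function runtests()
  @test from_model(to_model(3)) == 3
  true
end

end # module
```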
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.1.0
72cbf3b82ee037b0e57087394ecdff07760a100b
docs
7114
# CombinatorialEnumeration.jl

## Motivating example

Suppose you are given a formally specified theory, for example the theory of (small) [categories](https://www.math3ma.com/blog/what-is-a-category), which says that a *category* `C` is specified by:

- A set of *objects*, `Ob(C)`
- For each pair of objects `a,b ∈ Ob(C)`, a set of arrows `Hom(a,b)`.
- A composition operator that gives an arrow `f⋅g ∈ Hom(a,c)` for each pair of arrows `f ∈ Hom(a,b)` and `g ∈ Hom(b,c)`.
- An identity arrow `id(a) ∈ Hom(a,a)` for each object `a ∈ Ob(C)`
- Furthermore, this data must satisfy some constraints:
  - Unitality: `id(a)⋅f = f = f⋅id(b)` for each `f ∈ Hom(a,b)`
  - Associativity: `(f⋅g)⋅h = f⋅(g⋅h)` for each triple of composable arrows.

Even if each individual piece of data or constraint in this definition is straightforward, such definitions can seem overwhelming at first, insofar as we run into the following types of problems:

- What are the 5 simplest categories?
- Given this proposed category, is it actually a category?
- Are there any categories (bounded by some max size) such that some property `ϕ` holds?

There is pedagogical value in working through these types of problems in one's head, but there is also value in having these answers automatically ready at hand when trying to think about / build intuition for more complicated concepts. There is something mechanical about this process, and the purpose of this repo is to mechanize precisely that in an efficient way that is also usable for people trying to build their intuitions.

## Motivation

There are lots of constraint solvers which can find models. For example, SMT can generate a model `M` (or tell you one doesn't exist), and then you can add another axiom to rule out `M` and try again until you've enumerated all models. While SMT's first-order logic is an appealing modeling language due to its flexibility (usually, one can easily encode one's domain in this universal language), it is a bit awkward to deal with certain types of data, in particular combinatorial data (here meaning collections of finite sets with finite maps between them that satisfy certain properties, e.g. bipartite graphs).

This repo explores another corner of the design space, motivated by the idea of sketches from category theory. These have been argued to be a good [framework for knowledge management](https://www.nasa.gov/sites/default/files/ivv_wojtowicz_sketch_theory_as_a_framework_for_knowledge_management_090214.pdf) because the category theory behind sketches allows one to reason automatically about the relationships between different pieces of knowledge, without requiring reasoning about arbitrary first-order logic, which is notoriously difficult.

Sketch-constraint solving is like a subset of general constraint programming because there are only a few special types of constraints that need be enforced. The downside is that one may have to think about how to represent one's domain as a sketch, rather than using arbitrary first-order logic or code. There are at least a few upsides:

- the solver has the potential to be very efficient for the few types of constraints sketches require
- reasoning about combinatorial data, rather than logical formulas, allows us to work [up to isomorphism](https://github.com/AlgebraicJulia/CSetAutomorphisms.jl) incrementally throughout the entire model exploration process, rather than quotienting our results at the end.
- Sketches can be constructed compositionally, and, moreover, there are clean relationships between `Mod(A+B)` (i.e. the models of some sketch that is related in some way to `A` and `B`) and `Mod(A)` and `Mod(B)`.

### Aside: Notes on categories of sketch models

From "Toposes, Triples and Theories" (Barr and Wells):

- Theorem 4.3: Every FP-theory has an extension to an LE-theory which has the same models in any LE-category.
- Theorem 4.4: The category of set-valued models of a left exact theory has arbitrary limits and all filtered colimits; moreover, these are preserved by the set-valued functors of evaluation at the objects of the theory.
- Theorem 4.1: (outlines which kinds of sketches have which kinds of (co)limits)

## Models

For our purposes, a *model* is an instance of a relational database, i.e. a collection of tables with foreign keys between them. Normally, one can stick whatever data one wants into the tables of a database. Suppose our schema is E⇉V. If we enumerate models on this schema, we will obtain all directed multi-graphs. Here are the first few:

[todo]

However, we might wish to enumerate the smallest groups:

[todo]

There is no tight correspondence between groups and databases: at best, every group can be represented by a certain database instance, but only a very select few database instances on that schema are actually groups. If we tried to enumerate the databases that might be groups of order 10 and then filter those which are actually groups, we would have to enumerate 10^... This isn't feasible, so we need to incorporate our constraints into the search process. The language of finite limit sketches allows us to say how we wish to restrict which databases are valid models.

## Finite (co)limit sketches

A sketch contains a schema, just like a relational database. However, it contains three kinds of extra data which constrain models.

### Path Equations

We can assert that sequences of foreign keys in a database must yield the same result. An example of this is the case of reflexive graphs. We add to our schema a designated `refl` edge for each vertex. The equalities

- `refl; src = idᵥ`
- `refl; tgt = idᵥ`

capture the fact that a database with a vertex whose reflexive edge starts or ends at a different vertex is *not* a valid model.

### Cone objects

A sketch can designate a particular object to satisfy a *cone* constraint. This constraint says that the elements of that table are in bijection with matches of some pattern found elsewhere in the database. A pullback is the classic example of this: we want to identify pairs that agree on a certain value. For example, a database might have:

[CTS type example]

To enforce that the _ table actually contains the intended content, we assert it is in bijection with the following pattern.

[todo]

There are many more examples that show off the versatility of cone constraints.

### Cocone objects

The last type of constraint available is that of cocones. A cocone object is asserted to be in bijection with certain equivalence classes (i.e. partitions, quotients) of the objects in a diagram. A pushout is the classic example of this: we wish to glue together two tables in our database along a common boundary.

[example]

## Compositionality

This peculiar language of constraints has an advantage that comes from the fact that sketches can be related to each other (there is a *category* of sketches). This means that, for example, gluing sketches together can be a meaningful operation: if we have computed the models for the individual components, then a very efficient operation can construct the composite models, rather than starting from scratch.
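For concreteness, here is roughly how the reflexive-graph example from the Path Equations section above could be written down with this package's `LabeledGraph` and `Sketch` types. It is a sketch of the encoding, not canonical usage: in particular, the assumption that the designated identity edges (`:idV` here) may appear in path equations has not been checked against the implementation.

```julia
using CombinatorialEnumeration
using Catlab.CategoricalAlgebra

# Schema with objects V and E, morphisms src,tgt : E→V and refl : V→E,
# plus the designated identity edge on each object.
refl_schema = @acset LabeledGraph begin
  V=2; E=2+3; refl=1:2
  vlabel=[:V, :E]
  elabel=[:idV, :idE, :src, :tgt, :refl]
  src=[1, 2, 2, 2, 1]
  tgt=[1, 2, 1, 1, 2]
end

# Path equations refl;src = id_V and refl;tgt = id_V
eqs = [[[:refl, :src], [:idV]],
       [[:refl, :tgt], [:idV]]]

ReflGraph = Sketch(:ReflGraph, refl_schema; eqs=eqs)
```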
CombinatorialEnumeration
https://github.com/kris-brown/CombinatorialEnumeration.jl.git
[ "MIT" ]
0.0.2
bc656ab47a567aa9de59e9e215d69dc1c91ae313
code
1826
module TotalVariation

using DSP, SparseArrays, LinearAlgebra

export gstv, tv

# See ``Total Variation Denoising With Overlapping Group Sparsity'' by
# Ivan Selesnick and Po-Yu Chen (2013)

# Group Sparse Total Variation Denoising
function gstv(y::Vector{Float64}, k::Int, λ::Float64;
              show_cost::Bool=false, iter::Int=100)
    # Initialize Solution
    N = length(y)
    x = copy(y)

    # Differential of input
    b = diff(y)

    # Precalculate D D' where D is the first-order difference matrix
    DD::SparseMatrixCSC{Float64,Int} =
        spdiagm(-1=>-ones(N-2), 0=>2*ones(N-1), 1=>-ones(N-2))

    # Value To Prevent Singular Matrices
    epsilon = 1e-15

    # Convolution Mask - spreads D x over a larger area
    # This regularizes the problem with applying a gradient to a larger area.
    # At k=1 the normal total variational sparse solution (for D x) is found.
    h = ones(k)

    for i=1:iter
        u::Vector{Float64} = diff(x)
        r::Vector{Float64} = sqrt.(max.(epsilon, DSP.conv(u.^2, h)))
        Λ::Vector{Float64} = DSP.conv(1 ./ r, h)[k:end-(k-1)]
        F::SparseMatrixCSC{Float64,Int} = sparse(Diagonal(1 ./ Λ))/λ + DD
        if(show_cost)
            # 1/2||y-x||_2^2 + λΦ(Dx)
            # Where Φ(.) is the group sparse regularizer
            println("Cost at iter ", i, " is ",
                    0.5*sum(abs2.(x.-y)) + λ*sum(r))
        end
        tmp::Vector{Float64} = F\b
        dfb::Vector{Float64} = diff(tmp)

        x[1]       = y[1]       + tmp[1]
        x[2:end-1] = y[2:end-1] + dfb[:]
        x[end]     = y[end]     - tmp[end]
    end
    return x
end

# Normal Total Variation Problem
function tv(y::Vector{Float64}, λ::Float64; show_cost::Bool=false, iter::Int=100)
    gstv(y, 1, λ, show_cost=show_cost, iter=iter)
end

end # module
TotalVariation
https://github.com/fundamental/TotalVariation.jl.git
[ "MIT" ]
0.0.2
bc656ab47a567aa9de59e9e215d69dc1c91ae313
code
826
include("TotalVariation.jl") using TotalVariation using PyPlot ground_truth = vcat(ones(1000), -10sin.(linspace(0,pi,400)), 1ones(950)) noise = 10randn(length(ground_truth)) combined = ground_truth .+ noise gstv_out = TotalVariation.gstv(combined, 40, 15.0) tv_out = TotalVariation.tv(combined, 200.0) figure(1) PyPlot.clf(); subplot(2,2,1) title("original signal") plot(ground_truth) subplot(2,2,2) title("original + noise") plot(combined) subplot(2,2,3) title("tv correction") plot(tv_out) plot(ground_truth, color="g") subplot(2,2,4) title("gstv correction") plot(gstv_out) plot(ground_truth, color="g") rms(s) = sqrt(mean((ground_truth.-s).^2)) println("Initial Error = ", rms(combined)) println("Output Error [tv] = ", rms(tv_out)) println("Output Error [gstv] = ", rms(gstv_out))
TotalVariation
https://github.com/fundamental/TotalVariation.jl.git
[ "MIT" ]
0.0.2
bc656ab47a567aa9de59e9e215d69dc1c91ae313
code
475
using Test
using Random
using Statistics
using TotalVariation

# Initial signals
Random.seed!(0)
g_truth = [ones(100); 5*ones(200); -10*ones(100)]
noise = randn(size(g_truth))
mixed = g_truth .+ noise

# Denoising operation
denoised = tv(mixed, 10.0)

# Results
noise_before = mean((mixed    .- g_truth).^2)
noise_after  = mean((denoised .- g_truth).^2)
println("Noise before = ", noise_before)
println("Noise after  = ", noise_after)
@test noise_before > noise_after
TotalVariation
https://github.com/fundamental/TotalVariation.jl.git
[ "MIT" ]
0.0.2
bc656ab47a567aa9de59e9e215d69dc1c91ae313
docs
803
# TotalVariation

An implementation of Total Variation Denoising and Group Sparse Total Variation Denoising.

[![Build Status](https://travis-ci.org/fundamental/TotalVariation.jl.png)](https://travis-ci.org/fundamental/TotalVariation.jl)

Total Variation (TV) minimization uses the TV norm to reduce excess variation in 1D signals. Using TV for denoising results in a piecewise constant function with fewer pieces at higher levels of denoising.

Group sparse TV is an extension of the TV norm which models signals that have several localized transitions. Larger group sizes help model smoother signals with slow transitions.

For more information see `src/example.jl` and the source publication:

"Total Variation Denoising With Overlapping Group Sparsity" by Ivan Selesnick and Po-Yu Chen (2013)
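A minimal usage sketch (the signal and the parameter values below are made up for illustration; see `src/example.jl` for the fuller demo):

```julia
using TotalVariation

# A noisy piecewise-constant signal
y = vcat(ones(500), 5 .* ones(500)) .+ 0.5 .* randn(1000)

# Plain TV denoising; λ controls the strength of the smoothing
x_tv = tv(y, 10.0)

# Group-sparse TV denoising with group size k = 20
x_gstv = gstv(y, 20, 5.0)
```

Larger `λ` gives flatter (more aggressively denoised) output, and `gstv` with `k = 1` reduces to plain TV denoising.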
TotalVariation
https://github.com/fundamental/TotalVariation.jl.git
[ "MIT" ]
0.4.1
f8844cb81b0c3a2d5c96c1387abebe61f18e619e
code
5801
module TableIO

export read_table, write_table!, read_sql, list_tables

using TableIOInterface
using Tables, Requires
using DataFrames # required for multiple file types, therefore currently not optional

# specify if a reader accepts an io buffer as input or if creation of a temp file is required
supports_io_input(::TableIOInterface.AbstractFormat) = false
supports_io_input(::TableIOInterface.CSVFormat) = true
supports_io_input(::TableIOInterface.JSONFormat) = true
supports_io_input(::TableIOInterface.ArrowFormat) = true

# definition of the required packages for the specific formats
# if a format requires multiple packages, define them as a list
const PACKAGE_REQUIREMENTS = Dict{DataType, Union{Symbol, Vector{Symbol}}}(
    TableIOInterface.CSVFormat => :CSV,
    TableIOInterface.ZippedFormat => :ZipFile,
    TableIOInterface.JDFFormat => :JDF,
    TableIOInterface.ParquetFormat => :Parquet,
    TableIOInterface.ExcelFormat => :XLSX,
    TableIOInterface.SQLiteFormat => :SQLite,
    TableIOInterface.StataFormat => :StatFiles,
    TableIOInterface.SPSSFormat => :StatFiles,
    TableIOInterface.SASFormat => :StatFiles,
    TableIOInterface.JSONFormat => :JSONTables,
    TableIOInterface.ArrowFormat => :Arrow,
    TableIOInterface.PostgresFormat => [:LibPQ, :CSV],
    TableIOInterface.HDF5Format => :Pandas,
    TableIOInterface.JLD2Format => :JLD2,
)

## Dispatching on file extensions

"""
    read_table(filename:: AbstractString; kwargs...)

`filename`: path and filename of the input file
`kwargs...`: keyword arguments passed to the underlying file reading function (e.g. `CSV.File`)

Returns a Tables.jl interface compatible object.

Example:

    df = DataFrame(read_table("my_data.csv"); copycols=false)
"""
function read_table(filename:: AbstractString, args...; kwargs...)
    data_type = get_file_type(filename)
    return read_table(data_type, filename, args...; kwargs...)
end

"""
    read_table(file_picker:: Dict, args...; kwargs...)

Reading tabular data from a PlutoUI.jl FilePicker.

Usage (in a Pluto.jl notebook):

    using PlutoUI, TableIO, DataFrames
    using XLSX # import the packages required for the uploaded file formats

    @bind f PlutoUI.FilePicker()
    df = DataFrame(read_table(f); copycols=false)
"""
function read_table(file_picker:: Dict, args...; kwargs...)
    filename, data = _get_file_picker_data(file_picker)
    data_type = get_file_type(filename)
    data_buffer = IOBuffer(data)
    if supports_io_input(data_type)
        # if it is supported by the corresponding package, creation of a temporary
        # file is avoided and the IOBuffer is used directly
        data_object = data_buffer
    else
        tmp_file = joinpath(mktempdir(), filename)
        write(tmp_file, data_buffer)
        data_object = tmp_file
    end
    return read_table(data_type, data_object, args...; kwargs...)
end

"""
    list_tables(filename:: AbstractString)

Returns a list of all tables inside a file.
"""
function list_tables(filename:: AbstractString)
    data_type = get_file_type(filename)
    TableIOInterface.multiple_tables(data_type) || error("The data type $data_type does not support multiple tables per file.")
    return list_tables(data_type, filename)
end

"""
    write_table!(filename:: AbstractString, table; kwargs...):: AbstractString

`filename`: path and filename of the output file
`table`: a Tables.jl compatible object (e.g. a DataFrame) for storage
`kwargs...`: keyword arguments passed to the underlying file writing function (e.g. `CSV.write`)

Example:

    write_table!("my_output.csv", df)
"""
function write_table!(filename:: AbstractString, table, args...; kwargs...)
    data_type = get_file_type(filename)
    write_table!(data_type, filename, table, args...; kwargs...)
    nothing
end

"""
    read_sql(db, sql:: AbstractString)

Returns the result of the SQL query as a Tables.jl compatible object.
"""
function read_sql end

include("julia.jl")

## conditional dependencies

function __init__()
    @require CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b" begin
        include("csv.jl")
        @require LibPQ = "194296ae-ab2e-5f79-8cd4-7183a0a5a0d1" include("postgresql.jl")
    end
    @require ZipFile = "a5390f91-8eb1-5f08-bee0-b1d1ffed6cea" include("zip.jl")
    @require JDF = "babc3d20-cd49-4f60-a736-a8f9c08892d3" include("jdf.jl")
    @require Parquet = "626c502c-15b0-58ad-a749-f091afb673ae" include("parquet.jl")
    @require XLSX = "fdbf4ff8-1666-58a4-91e7-1b58723a45e0" include("xlsx.jl")
    @require StatFiles = "1463e38c-9381-5320-bcd4-4134955f093a" include("stat_files.jl")
    @require SQLite = "0aa819cd-b072-5ff4-a722-6bc24af294d9" include("sqlite.jl")
    @require JSONTables = "b9914132-a727-11e9-1322-f18e41205b0b" include("json.jl")
    @require Arrow = "69666777-d1a9-59fb-9406-91d4454c9d45" include("arrow.jl")
    @require Pandas = "eadc2687-ae89-51f9-a5d9-86b5a6373a9c" include("pandas.jl")
    @require JLD2 = "033835bb-8acc-5ee8-8aae-3f567f8a3819" include("jld2.jl")
end

## Utilities

get_package_requirements(::T) where {T <: TableIOInterface.AbstractFormat} = PACKAGE_REQUIREMENTS[T]
get_package_requirements(filename:: AbstractString) = get_package_requirements(get_file_type(filename))

_checktable(table) = Tables.istable(typeof(table)) || error("table has no Tables.jl compatible interface")

# poor man's approach to prevent SQL injections / garbage inputs
_checktablename(tablename) = match(r"^[a-zA-Z0-9_.]*$", tablename) === nothing && error("tablename must only contain alphanumeric characters and underscores")

function _get_file_picker_data(file_picker:: Dict)
    data = file_picker["data"]:: Vector{UInt8} # brings back type stability
    length(data) == 0 && error("no file selected yet")
    filename = file_picker["name"]:: String
    return filename, data
end

end
TableIO
https://github.com/lungben/TableIO.jl.git
[ "MIT" ]
0.4.1
f8844cb81b0c3a2d5c96c1387abebe61f18e619e
code
434
## Apache Arrow

@info "Arrow.jl is available - including functionality to read / write Arrow files"

using .Arrow

function read_table(::TableIOInterface.ArrowFormat, filename:: Union{AbstractString, IO}; kwargs...)
    return Arrow.Table(filename; kwargs...)
end

function write_table!(::TableIOInterface.ArrowFormat, filename:: Union{AbstractString, IO}, table; kwargs...)
    Arrow.write(filename, table; kwargs...)
    nothing
end
TableIO
https://github.com/lungben/TableIO.jl.git