licenses (sequence, lengths 1–3) | version (string, 677 classes) | tree_hash (string, length 40) | path (string, 1 class) | type (string, 2 classes) | size (string, lengths 2–8) | text (string, lengths 25–67.1M) | package_name (string, lengths 2–41) | repo (string, lengths 33–86)
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.1.14 | 20a56bc6d7fc2ffbea278d1f47fdfda594a01b46 | docs | 7213 | # How to... (FAQ)
## Build your own `LazyOperator`
Imagine that you want some kind of function (~operator) that has a different behavior depending on the cell (or face) it is applied to. The `PhysicalFunction` won't do the job since it is assumed that the provided function applies the same way in all the different cells. What you want is a `LazyOperator`. Here is how to build a custom one.
For the example, let's say that you want an operator whose action is to multiply `x`, the evaluated point, by the index of the cell surrounding `x`. Start by importing some Bcube material and declaring a type corresponding to this operator:
```julia
using Bcube
import Bcube: CellInfo, CellPoint, get_coords
struct DummyOperator <: Bcube.AbstractLazy end
```
Then, specify what happens when `Bcube` asks for the restriction of your operator in a given cell. This is done before applying it to any point. In most cases, you don't want to do anything special, so just return the operator itself:
```julia
Bcube.materialize(op::DummyOperator, ::CellInfo) = op
```
Now, specify what to return when `Bcube` wants to apply this operator to a given point in a cell. As said earlier, we want it to return the point multiplied by the cell index (but it could be anything you want):
```julia
function Bcube.materialize(
::DummyOperator,
cPoint::CellPoint,
)
x = get_coords(cPoint)
cInfo = Bcube.get_cellinfo(cPoint)
index = Bcube.cellindex(cInfo)
return x * index
end
```
That's it! To see your operator in action, take a look at the related [section](@ref Evaluate-a-LazyOperator-on-a-specific-point).
In this short example, note that we restricted ourselves to `CellPoint`: the `DummyOperator` won't be applicable to a face. To make it applicable to a face, you have to specialize the materialization on a `Side` of a `FaceInfo` and on a `Side` of a `FacePoint`. Check out the source code of `TangentialProjector` to see this in action. Besides, a `CellPoint` is parametrized by a `DomainStyle`, allowing you to specify different behaviors depending on whether your operator is applied to a point in the `ReferenceDomain` or in the `PhysicalDomain`.
## Evaluate a `LazyOperator` on a specific point
Suppose that you have built a mesh, defined a `LazyOperator` on this mesh, and you want, for debugging purposes, to evaluate this operator at a point of your choice. First, let's define our example operator:
```julia
using Bcube
mesh = circle_mesh(10)
op = Bcube.TangentialProjector()
```
Then, let's define the point where we want to evaluate this operator. For this, we need to create a so-called `CellPoint`. Its structure is quite basic: it needs the coordinates, the mesh cell owning these coordinates, and whether the coordinates are given in the `ReferenceDomain` or in the `PhysicalDomain`. Here, we select the first cell of the mesh and choose the coordinates `[0.5]` (recall that we are in 1D, hence this vector of one component):
```julia
cInfo = Bcube.CellInfo(mesh, 1)
cPoint = Bcube.CellPoint([0.5], cInfo, Bcube.ReferenceDomain())
```
Now, there are always two steps to evaluate a `LazyOperator`: first materialize it on a cell (or a face), then evaluate it on a cell-point (or face-point). The materialization on a cell does not necessarily trigger anything; it depends on the operator. For instance, an analytic function will not have a cell-specific behaviour, whereas a shape function will.
```julia
op_cell = Bcube.materialize(op, cInfo)
```
Finally, we can apply our operator to the cell point defined above and observe the result; this is also called a "materialization":
```julia
@show Bcube.materialize(op_cell, cPoint)
```
Note that before and after the materialization on a cell point, the operator can be displayed as a tree with
```julia
Bcube.show_lazy_operator(op)
Bcube.show_lazy_operator(op_cell)
```
## Get the coordinates of Lagrange dofs
For a **Lagrange** "uniform" function space, the dofs correspond to vertices. The following `lagrange_dof_to_coords` function returns a matrix: each row contains the coordinates of the dof corresponding to the row number.
```julia
function lagrange_dof_to_coords(mesh, degree)
U = TrialFESpace(FunctionSpace(:Lagrange, degree), mesh)
coords = map(1:Bcube.spacedim(mesh)) do i
f = PhysicalFunction(x -> x[i])
u = FEFunction(U)
projection_l2!(u, f, mesh)
return get_dof_values(u)
end
return hcat(coords...)
end
```
For instance:
```julia
using Bcube
mesh = rectangle_mesh(2, 3; xmin = 1, xmax = 2, ymin = 3, ymax = 5)
coords = lagrange_dof_to_coords(mesh, 1)
@show coords[2, :] # coordinates of dof '2' in the global numbering
```
## Comparing manually the benchmarks with `main`
Let's say you want to compare the performance of your current branch (named "target" hereafter) with the `main` branch (named "baseline" hereafter).
From the `Bcube.jl/` directory, open a REPL and type:
```julia
pkg> activate --temp
pkg> add BenchmarkTools PkgBenchmark StaticArrays WriteVTK UnPack
pkg> dev .
using PkgBenchmark
import Bcube
benchmarkpkg(Bcube, BenchmarkConfig(; env = Dict("JULIA_NUM_THREADS" => "1")); resultfile = joinpath(@__DIR__, "result-target.json"))
```
This will create a `result-target.json` in the current directory.
Then check out the `main` branch. Start a fresh REPL and type (almost the same):
```julia
pkg> activate --temp
pkg> add BenchmarkTools PkgBenchmark StaticArrays WriteVTK UnPack
pkg> dev .
using PkgBenchmark
import Bcube
benchmarkpkg(Bcube, BenchmarkConfig(; env = Dict("JULIA_NUM_THREADS" => "1")); resultfile = joinpath(@__DIR__, "result-baseline.json"))
```
This will create a `result-baseline.json` in the current directory.
You can now "compare" the two files by running (watch out for the order):
```julia
target = PkgBenchmark.readresults("result-target.json")
baseline = PkgBenchmark.readresults("result-baseline.json")
judgement = judge(target, baseline)
export_markdown("judgement.md", judgement)
```
This will create the markdown file `judgement.md` with the results.
For more details, once you've built the `judgement` object, you can also run the following code, adapted from `https://github.com/tkf/BenchmarkCI.jl`:
```julia
open("detailed-judgement.md", "w") do io
println(io, "# Judge result")
export_markdown(io, judgement)
println(io)
println(io)
println(io, "---")
println(io, "# Target result")
export_markdown(io, PkgBenchmark.target_result(judgement))
println(io)
println(io)
println(io, "---")
println(io, "# Baseline result")
export_markdown(io, PkgBenchmark.baseline_result(judgement))
println(io)
println(io)
println(io, "---")
end
```
## Run the benchmark manually
Let's say you want to run the benchmarks locally (without comparing with `main`).
From the `Bcube.jl/` directory, open a REPL and type:
```julia
pkg> activate --temp
pkg> add BenchmarkTools PkgBenchmark StaticArrays WriteVTK UnPack
pkg> dev .
using PkgBenchmark
import Bcube
results = benchmarkpkg(Bcube, BenchmarkConfig(; env = Dict("JULIA_NUM_THREADS" => "1")); resultfile = joinpath(@__DIR__, "result.json"))
export_markdown("results.md", results)
```
This will create the markdown file `results.md` with the results.
| Bcube | https://github.com/bcube-project/Bcube.jl.git |
|
[
"MIT"
] | 0.1.14 | 20a56bc6d7fc2ffbea278d1f47fdfda594a01b46 | docs | 752 | # Cell function
As explained earlier, at least two coordinate systems exist in Bcube: the "reference" coordinates (`ReferenceDomain`) and the "physical" coordinates (`PhysicalDomain`). The evaluation of a function at a point in a cell depends on the way this point has been defined, hence the definition of `CellPoint`s that embed the coordinate system. Given a `CellPoint` (or possibly a `FacePoint`), an `AbstractCellFunction` will be evaluated and the mapping from the `ReferenceDomain` to the `PhysicalDomain` (or vice versa) will be performed internally if necessary: if an `AbstractCellFunction` defined in terms of reference coordinates is applied to a `CellPoint` expressed in the reference coordinate system, no mapping is needed.
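To illustrate, here is a minimal sketch reusing only the calls shown in the FAQ above (the mesh, the cell index and the coordinates are arbitrary choices, and we assume a `PhysicalFunction` can be materialized like any other lazy operator):
```julia
using Bcube

mesh = circle_mesh(10)
cInfo = Bcube.CellInfo(mesh, 1)

# A function defined in terms of physical coordinates
f = PhysicalFunction(x -> 2 * x[1])

# A point expressed in the reference coordinate system of the first cell
p_ref = Bcube.CellPoint([0.5], cInfo, Bcube.ReferenceDomain())

# The reference -> physical mapping is applied internally before evaluating `f`
f_cell = Bcube.materialize(f, cInfo)
@show Bcube.materialize(f_cell, p_ref)
```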
| Bcube | https://github.com/bcube-project/Bcube.jl.git |
|
[
"MIT"
] | 0.1.14 | 20a56bc6d7fc2ffbea278d1f47fdfda594a01b46 | docs | 524 | # Conventions
This documentation uses the following notation and naming conventions:
- coordinates inside a reference frame are noted $$\hat{x}, \hat{y}$$ or $$\xi, \eta$$ while coordinates in the physical frame are noted $$x,y$$
- when talking about a mapping, $$F$$ or sometimes $$F_{rp}$$ designates the mapping from the reference element to the physical element. Conversely, $$F^{-1}$$ or sometimes $$F_{pr}$$ designates the mapping from the physical element to the reference element.
- "dof" means "degree of freedom" | Bcube | https://github.com/bcube-project/Bcube.jl.git |
|
[
"MIT"
] | 0.1.14 | 20a56bc6d7fc2ffbea278d1f47fdfda594a01b46 | docs | 2224 | # Function and FE spaces
### `AbstractFunctionSpace`
In Bcube, a `FunctionSpace` is defined by a type (nodal Lagrange polynomials, modal Taylor expansion, etc.) and a degree. For each implemented `FunctionSpace`, a list of shape functions is associated with a given `Shape`. For instance, one can get the shape functions associated with the Lagrange polynomials of order 3 on a `Square`. Note that for "tensor" elements such as `Line`, `Square` or `Cube`, the Lagrange polynomials are available at any order, as they are computed symbolically.
### `AbstractFESpace`
Then, an `FESpace` (more precisely a `SingleFESpace`) is a function space associated with a numbering of the degrees of freedom. Note that the numbering may depend on the continuous or discontinuous nature of the space. Hence a `SingleFESpace` basically takes four inputs to be built: a `FunctionSpace`, the number of components of this space (scalar or vector), an indicator of the continuous/discontinuous characteristic, and the mesh. The dof numbering is built by combining the mesh numberings (nodes, cells, faces) and the function space. Note that the degree of the `FunctionSpace` can differ from the "degree" of the mesh elements: it is possible to build a `SingleFESpace` with P2 polynomials on a mesh only containing straight lines (defined by only two nodes, `Bar2_t`). Optionally, a `SingleFESpace` can also contain the tags of the boundaries where Dirichlet condition(s) apply.
A `MultiFESpace` is simply a set of `SingleFESpace`s, possibly of different natures. Its benefit is that it allows building a "global" numbering of all the dofs represented by this space. This is especially convenient to solve systems of equations.
### `AbstractFEFunction`
With a `SingleFESpace`, one can build the representation of a function discretized on this space: a `FEFunction`. This structure stores a vector of values, one for each degree of freedom of the finite element space. To set or get the values of a `FEFunction`, the functions `set_dof_values!` and `get_dof_values` are available respectively. A `FEFunction` can be projected onto another `FESpace`, or evaluated at some specific mesh location (a coordinate, all the nodes, all the mesh centers, etc.).
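As a minimal sketch tying these pieces together (reusing only calls that appear in the FAQ section of this documentation; `TrialFESpace` is used here as a convenient concrete FE space, and the mesh and analytic function are arbitrary):
```julia
using Bcube

mesh = rectangle_mesh(2, 3; xmin = 1, xmax = 2, ymin = 3, ymax = 5)

# Function space (nodal Lagrange, degree 1) and an FE space built on the mesh
U = TrialFESpace(FunctionSpace(:Lagrange, 1), mesh)

# A discrete function on this space, filled by L2-projection of an analytic function
u = FEFunction(U)
projection_l2!(u, PhysicalFunction(x -> x[1] + x[2]), mesh)
@show get_dof_values(u)
```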
| Bcube | https://github.com/bcube-project/Bcube.jl.git |
|
[
"MIT"
] | 0.1.14 | 20a56bc6d7fc2ffbea278d1f47fdfda594a01b46 | docs | 1125 | # Geometry and mesh
A `Mesh` is basically a set of nodes (`Node`), a set of entities (the mesh elements) and a list of connectivities that link the entities to each other and to the nodes.
In Bcube every mesh entity has a corresponding reference `Shape`, a simplified or canonical representation of this element. A 1D line is mapped onto the `[-1,1]` segment, and a rectangle is mapped onto a square, for instance. On these reference shapes, (almost) everything is known: the vertices' locations, the area, the quadrature points, etc. Hence in Bcube we always compute things on the reference shape. For "Lagrange" elements (such as `Bar*_t`, `Tri*_t`, `Quad*_t`, `Tetra*_t`, `Hexa*_t`, `Penta*_t`, etc.), the mapping from the reference shape to a geometrical element is directly obtained from the corresponding Lagrange polynomials and the element node coordinates. Given a geometrical element with `n` nodes `M_i`, the mapping reads:
```math
F(\xi) = \sum_{i=1}^n \hat{\lambda}_i(\xi)M_i
```
where $\hat{\lambda}_i$ are the Lagrange polynomials whose order matches the element order.

| Bcube | https://github.com/bcube-project/Bcube.jl.git |
|
[
"MIT"
] | 0.1.14 | 20a56bc6d7fc2ffbea278d1f47fdfda594a01b46 | docs | 995 | # Integration
To compute an integral on a geometrical element, for instance a curved element, a variable substitution is used to compute the integral on the corresponding reference `Shape`. This variable substitution reads:
```math
\int_\Omega g(x) \mathrm{\,d} \Omega = \int_{\hat{\Omega}} |J(\hat{x})| \left(g \circ F \right)(\hat{x}) \mathrm{\,d} \hat{\Omega},
```
where we recall that $$F$$ is the reference-to-physical mapping and $$J$$ is the determinant of the Jacobian matrix of this mapping. Depending on the shape and element order, this determinant is either hard-coded or computed with `ForwardDiff`.
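As an illustration of the generic (non-hard-coded) path, the determinant of the Jacobian of a mapping can be computed with `ForwardDiff` along these lines (the mapping `F` below is an arbitrary example, not a Bcube API):
```julia
using ForwardDiff, LinearAlgebra

# Arbitrary affine 2D mapping from reference to physical coordinates (illustration only)
F(ξ) = [2 * ξ[1] + 0.1 * ξ[2], 3 * ξ[2]]

# |J| at a reference point: determinant of the Jacobian matrix of F
detJ(ξ) = det(ForwardDiff.jacobian(F, ξ))

detJ([0.0, 0.0])  # 6.0 for this affine mapping
```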
Now, to compute the right-hand side, i.e. the integral on the reference shape, quadrature rules are applied to $\hat{g} = |J| \, (g \circ F)$:
```math
\int_{\hat{\Omega}} \hat{g}(\hat{x}) \mathrm{\,d} \hat{\Omega} = \sum_{i =1}^{N_q} \omega_i \hat{g}(\hat{x}_i)
```
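For instance, the two-point Gauss-Legendre rule on the reference segment $[-1,1]$ uses the nodes $\hat{x}_{1,2} = \mp 1/\sqrt{3}$ with weights $\omega_1 = \omega_2 = 1$, and integrates polynomials up to degree 3 exactly:
```math
\int_{-1}^{1} \hat{g}(\hat{x}) \mathrm{\,d} \hat{x} \approx \hat{g}\left(-\tfrac{1}{\sqrt{3}}\right) + \hat{g}\left(\tfrac{1}{\sqrt{3}}\right)
```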
A specific procedure is applied to compute integrals on a face of a cell (i.e. a surface integral on a face of a volume element).
| Bcube | https://github.com/bcube-project/Bcube.jl.git |
|
[
"MIT"
] | 0.1.14 | 20a56bc6d7fc2ffbea278d1f47fdfda594a01b46 | docs | 34 | # LazyOperators
WORK IN PROGRESS
| Bcube | https://github.com/bcube-project/Bcube.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 620 | using FileIO
using ArgParse
"""
parse_commandline(ARGS)
Parse command line arguments and return argument dictionary
"""
function parse_commandline(ARGS)
s = ArgParseSettings("QXContexts")
@add_arg_table! s begin
"input"
help = "Input file to example"
required = true
arg_type = String
end
return parse_args(ARGS, s)
end
"""
main(ARGS)
QXContexts entry point
"""
function main(ARGS)
args = parse_commandline(ARGS)
input_file = args["input"]
@show load(input_file)
end
if abspath(PROGRAM_FILE) == @__FILE__
main(ARGS)
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 2968 | # using MPI
using QXContexts
using ArgParse
using TimerOutputs
"""
parse_commandline(ARGS)
Parse command line arguments and return argument dictionary
"""
function parse_commandline(ARGS)
s = ArgParseSettings("QXContexts")
@add_arg_table! s begin
"--dsl", "-d"
help = "DSL file path"
required = true
arg_type = String
"--parameter-file", "-p"
help = "Parameter file path, default is to use dsl filename with .yml suffix"
default = nothing
arg_type = String
"--input-file", "-i"
help = "Input data file path, default is to use dsl filename with .jld2 suffix"
default = nothing
arg_type = String
"--output-file", "-o"
help = "Output data file path"
required = true
arg_type = String
"--number-amplitudes", "-a"
help = "Number of amplitudes to calculate out of number in parameter file"
default = nothing
arg_type = Int
"--number-slices", "-n"
help = "The number of slices to use out of number given in parameter file"
default = nothing
arg_type = Int
"--sub-comm-size", "-s"
help = "The number of ranks to assign to each sub-communicator for partitions"
default = 1
arg_type = Int
"--mpi", "-m"
help = "Use MPI"
action = :store_true
"--gpu", "-g"
help = "Use GPU if available"
action = :store_true
"--timings", "-t"
help = "Enable output of timings with warmup run"
action = :store_true
end
return parse_args(ARGS, s)
end
"""
main(ARGS)
QXContexts entry point
"""
function main(ARGS)
args = parse_commandline(ARGS)
dsl_file = args["dsl"]
input_file = args["input-file"]
param_file = args["parameter-file"]
output_file = args["output-file"]
number_amplitudes = args["number-amplitudes"]
number_slices = args["number-slices"]
sub_comm_size = args["sub-comm-size"]
use_mpi = args["mpi"]
use_gpu = args["gpu"]
timings = args["timings"]
results = execute(dsl_file, input_file, param_file, output_file;
use_mpi=use_mpi, sub_comm_size=sub_comm_size,
use_gpu=use_gpu, max_amplitudes=number_amplitudes,
max_slices=number_slices,
timings=timings)
if timings
reset_timer!()
results = execute(dsl_file, input_file, param_file, output_file;
use_mpi=use_mpi, sub_comm_size=sub_comm_size,
use_gpu=use_gpu, max_amplitudes=number_amplitudes,
max_slices=number_slices,
timings=timings)
end
end
if abspath(PROGRAM_FILE) == @__FILE__
main(ARGS)
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 526 | using QXContexts
using Documenter
makedocs(;
modules=[QXContexts],
authors="QuantEx team",
repo="https://github.com/JuliaQX/QXContexts.jl/blob/{commit}{path}#L{line}",
sitename="QXContexts.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://JuliaQX.github.io/QXContexts.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
"LICENSE" => "license.md"
],
)
deploydocs(;
repo="github.com/JuliaQX/QXContexts.jl",
)
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1526 | using Logging
using ArgParse
using MPI
using QXContexts
"""
main(ARGS)
QXContexts entry point
"""
function main(args)
s = ArgParseSettings("QXContexts")
@add_arg_table! s begin
"--sub-comm-size", "-s"
help = "The number of ranks to assign to each sub-communicator for partitions"
default = 1
arg_type = Int
"--mpi", "-m"
help = "Use MPI"
action = :store_true
"--gpu", "-g"
help = "Use GPU if available"
action = :store_true
"--verbose", "-v"
help = "Enable logging"
action = :store_true
end
parsed_args = parse_args(args, s)
if parsed_args["verbose"]
if parsed_args["mpi"]
if !MPI.Initialized() MPI.Init() end
global_logger(QXContexts.Logger.QXLoggerMPIPerRank())
else
global_logger(QXContexts.Logger.QXLogger())
end
end
file_path = @__DIR__
dsl_file = joinpath(file_path, "ghz/ghz_5.qx")
param_file = joinpath(file_path, "ghz/ghz_5.yml")
input_file = joinpath(file_path, "ghz/ghz_5.jld2")
output_file = joinpath(file_path, "ghz/out.jld2")
results = execute(dsl_file, input_file, param_file, output_file;
use_mpi=parsed_args["mpi"],
sub_comm_size=parsed_args["sub-comm-size"],
use_gpu=parsed_args["gpu"])
if parsed_args["verbose"]
@info results
end
end
main(ARGS) | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 705 | using QXContexts
using MPI
using Logging
using Random
import QXContexts.Logger: @perf
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
#io = nothing
if MPI.Comm_size(comm) > 1
io = MPI.File.open(comm, "mpi_io.dat", read=true, write=true, create=true)
else
io = stdout
end
global_logger(QXContexts.Logger.QXLogger(io, Logging.Error))
@info "Hello world, I am $(rank) of $(MPI.Comm_size(comm)) world-size"
@warn "Rank $(rank) doesn't like what you are doing"
@error "Rank $(rank) is very unhappy!"
if rank == 2
@info "Hello again from rank $(rank)"
a = @perf sum(rand(100,100) * rand(100,100))
println(a)
else
@warn "I AM RANK $(rank)"
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1537 | using Logging
using ArgParse
using MPI
using QXContexts
"""
main(ARGS)
QXContexts entry point
"""
function main(args)
s = ArgParseSettings("QXContexts")
@add_arg_table! s begin
"--sub-comm-size", "-s"
help = "The number of ranks to assign to each sub-communicator for partitions"
default = 1
arg_type = Int
"--mpi", "-m"
help = "Use MPI"
action = :store_true
"--gpu", "-g"
help = "Use GPU if available"
action = :store_true
"--verbose", "-v"
help = "Enable logging"
action = :store_true
end
parsed_args = parse_args(args, s)
if parsed_args["verbose"]
if parsed_args["mpi"]
if !MPI.Initialized() MPI.Init() end
global_logger(QXContexts.Logger.QXLoggerMPIPerRank())
else
global_logger(QXContexts.Logger.QXLogger())
end
end
file_path = @__DIR__
dsl_file = joinpath(file_path, "rqc/rqc_4_4_24.qx")
param_file = joinpath(file_path, "rqc/rqc_4_4_24.yml")
input_file = joinpath(file_path, "rqc/rqc_4_4_24.jld2")
output_file = joinpath(file_path, "rqc/out.jld2")
results = execute(dsl_file, input_file, param_file, output_file;
use_mpi=parsed_args["mpi"],
sub_comm_size=parsed_args["sub-comm-size"],
use_gpu=parsed_args["gpu"])
if parsed_args["verbose"]
@info results
end
end
main(ARGS) | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 422 | module QXContexts
using Reexport
include("logger.jl")
include("parameters.jl")
include("compute_graph/compute_graph.jl")
include("contexts/contexts.jl")
include("sampling.jl")
include("execution.jl")
include("sysimage/sysimage.jl")
@reexport using QXContexts.Logger
@reexport using QXContexts.Param
@reexport using QXContexts.ComputeGraphs
@reexport using QXContexts.Contexts
@reexport using QXContexts.Sampling
end
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 3737 | export execute, initialise_sampler, timer_output
using FileIO
using TimerOutputs
using CUDA
using QXContexts.Param
using QXContexts.ComputeGraphs
using QXContexts.Sampling
# const timer_output = TimerOutput()
# if haskey(ENV, "QXRUN_TIMER")
# timeit_debug_enabled() = return true # Function fails to be defined by enable_debug_timings
# TimerOutputs.enable_debug_timings(Execution)
# end
"""
write_results(results, output_file)
Save results from the calculations to the given output file.
"""
function write_results(results, output_file)
@assert splitext(output_file)[end] == ".jld2" "Output file should have jld2 suffix"
if results !== nothing
save(output_file, "results", results)
end
end
"""
initialise_sampler(dsl_file::String,
input_file::String,
param_file::String;
use_mpi::Bool=false,
sub_comm_size::Int=1,
use_gpu::Bool=false,
elt::Type=ComplexF32)
Initialise the sampler
"""
function initialise_sampler(dsl_file::String,
input_file::String,
param_file::String;
use_mpi::Bool=false,
sub_comm_size::Int=1,
use_gpu::Bool=false,
elt::Type=ComplexF32)
# read dsl file of commands and data file with initial tensors
@timeit "Parse input files" cg, _ = parse_dsl_files(dsl_file, input_file)
# Create a context to execute the commands in
T = if use_gpu
@assert CUDA.functional() "CUDA installation is not functional, ensure you have a GPU and appropriate drivers"
CuArray{elt}
else
Array{elt}
end
@timeit "Create Context" begin
ctx = QXContext{T}(cg)
if use_mpi
ctx = QXMPIContext(ctx, sub_comm_size=sub_comm_size)
end
end
# read sampler parameters from parameter file
sampler_args = parse_parameters(param_file)
# Create a sampler to produce bitstrings to get amplitudes for and a variable to store
# the results.
@timeit "Create sampler" create_sampler(ctx, sampler_args)
end
"""
execute(dsl_file::String,
input_file::Union{String, Nothing}=nothing,
param_file::Union{String, Nothing}=nothing,
output_file::String="";
max_amplitudes::Union{Int, Nothing}=nothing,
max_slices::Union{Int, Nothing}=nothing,
timings::Bool=false,
kwargs...)
Main entry point for running calculations. Loads input data, runs computations and
writes results to output files.
"""
function execute(dsl_file::String,
input_file::Union{String, Nothing}=nothing,
param_file::Union{String, Nothing}=nothing,
output_file::String="";
max_amplitudes::Union{Int, Nothing}=nothing,
max_slices::Union{Int, Nothing}=nothing,
timings::Bool=false,
kwargs...)
if input_file === nothing
input_file = splitext(dsl_file)[1] * ".jld2"
end
if param_file === nothing
param_file = splitext(dsl_file)[1] * ".yml"
end
@timeit "Init sampler" sampler = initialise_sampler(dsl_file, input_file, param_file;
kwargs...)
@timeit "Simulation" results = sampler(max_amplitudes=max_amplitudes,
max_slices=max_slices)
if output_file != "" && results !== nothing
@timeit "Write results" write_results(results, output_file)
if timings
print_timer()
end
end
results
end
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 7288 | module Logger
# Follow approach taken by https://github.com/CliMA/Oceananigans.jl logger.jl
export QXLogger, QXLoggerMPIPerRank, QXLoggerMPIShared
using Logging
using Dates
using MPI
using TimerOutputs
import UUIDs: UUID, uuid4
import Logging: shouldlog, min_enabled_level, catch_exceptions, handle_message
const PerfLogger = Logging.LogLevel(-125)
struct QXLogger <: Logging.AbstractLogger
stream::IO
min_level::Logging.LogLevel
message_limits::Dict{Any,Int}
show_info_source::Bool
session_id::UUID
end
struct QXLoggerMPIShared <: Logging.AbstractLogger
stream::Union{MPI.FileHandle, Nothing}
min_level::Logging.LogLevel
message_limits::Dict{Any,Int}
show_info_source::Bool
session_id::UUID
comm::MPI.Comm
end
struct QXLoggerMPIPerRank <: Logging.AbstractLogger
stream::Union{MPI.FileHandle, Nothing}
min_level::Logging.LogLevel
message_limits::Dict{Any,Int}
show_info_source::Bool
session_id::UUID
comm::MPI.Comm
root_path::String
end
"""
QXLogger(stream::IO=stdout, level=Logging.Info; show_info_source=false)
Single-process logger for QXContexts.
"""
function QXLogger(stream::IO=stdout, level=Logging.Info; show_info_source=false)
return QXLogger(stream, level, Dict{Any,Int}(), show_info_source, uuid4())
end
"""
QXLoggerMPIShared(stream=nothing,
level=Logging.Info;
show_info_source=false,
comm=MPI.COMM_WORLD,
path::String=".")
MPI-IO enabled logger that outputs to a single shared file for all ranks.
"""
function QXLoggerMPIShared(stream=nothing,
level=Logging.Info;
show_info_source=false,
comm=MPI.COMM_WORLD,
path::String=".")
if MPI.Initialized()
if MPI.Comm_rank(comm) == 0
log_uid = uuid4()
else
log_uid = nothing
end
log_uid = MPI.bcast(log_uid, 0, comm)
f_stream = MPI.File.open(comm, joinpath(path, "QXContexts_io_shared_$(log_uid).log"), read=true, write=true, create=true)
else
        error("""MPI is required for this logger. Please ensure MPI is initialised. Use `QXLogger` for non-distributed logging""")
end
return QXLoggerMPIShared(f_stream, level, Dict{Any,Int}(), show_info_source, log_uid, comm)
end
"""
QXLoggerMPIPerRank(stream=nothing,
level=Logging.Info;
show_info_source=false,
comm=MPI.COMM_WORLD,
path::String=".")
MPI-friendly logger that outputs to a new file per rank. Creates a UUIDs.uuid4 labelled directory and a per-rank log-file
"""
function QXLoggerMPIPerRank(stream=nothing,
level=Logging.Info;
show_info_source=false,
comm=MPI.COMM_WORLD,
path::String=".")
if MPI.Initialized()
if MPI.Comm_rank(comm) == 0
log_uid = uuid4()
else
log_uid = nothing
end
log_uid = MPI.bcast(log_uid, 0, comm)
else
        throw("""MPI is required for this logger. Please ensure MPI is initialised. Use `QXLogger` for non-distributed logging""")
end
return QXLoggerMPIPerRank(stream, level, Dict{Any,Int}(), show_info_source, log_uid, comm, path)
end
shouldlog(logger::QXLogger, level, _module, group, id) = get(logger.message_limits, id, 1) > 0
shouldlog(logger::QXLoggerMPIShared, level, _module, group, id) = get(logger.message_limits, id, 1) > 0
shouldlog(logger::QXLoggerMPIPerRank, level, _module, group, id) = get(logger.message_limits, id, 1) > 0
min_enabled_level(logger::QXLogger) = logger.min_level
min_enabled_level(logger::QXLoggerMPIShared) = logger.min_level
min_enabled_level(logger::QXLoggerMPIPerRank) = logger.min_level
catch_exceptions(logger::QXLogger) = false
catch_exceptions(logger::QXLoggerMPIShared) = false
catch_exceptions(logger::QXLoggerMPIPerRank) = false
function level_to_string(level)
level == Logging.Error && return "ERROR"
level == Logging.Warn && return "WARN"
level == Logging.Info && return "INFO"
level == Logging.Debug && return "DEBUG"
level == PerfLogger && return "PERF"
return string(level)
end
macro perf(expression)
if haskey(ENV, "QXRUN_TIMER")
ex = repr(expression)
return :(t = @elapsed res = $expression; Logging.@logmsg(PerfLogger, (t, $ex)); res)
else
return esc(expression)
end
end
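# Example usage (sketch; the measurement is only active when the QXRUN_TIMER
# environment variable is set, otherwise the expression is returned unchanged):
#
#     a = @perf sum(rand(100, 100) * rand(100, 100))
#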
"""
stamp_builder(rank::Int)
Builds logger timestamp with rank and hostname capture.
Rank defaults to 0 for single-process evaluations
"""
function stamp_builder(rank::Int)
io = IOBuffer()
write(io, Dates.format(Dates.now(), "[yyyy/mm/dd-HH:MM:SS.sss]"))
write(io, "[rank="*string(rank)*"]")
write(io, "[host="*gethostname()*"]")
s = String(take!(io))
close(io)
return s
end
function handle_message(logger::Union{QXLogger, QXLoggerMPIShared}, level, message, _module, group, id,
filepath, line; maxlog = nothing, kwargs...)
if !isnothing(maxlog) && maxlog isa Int
remaining = get!(logger.message_limits, id, maxlog)
logger.message_limits[id] = remaining - 1
remaining > 0 || return nothing
end
buf = IOBuffer()
level_name = level_to_string(level)
    if MPI.Initialized() && hasproperty(logger, :comm)
        rank = MPI.Comm_rank(logger.comm)
    else
        rank = 0
    end
module_name = something(_module, "nothing")
msg_timestamp = stamp_builder(rank)
formatted_message = "$(msg_timestamp) $(level_name) $message"
if logger.show_info_source || level != Logging.Info
formatted_message *= " -@-> $(filepath):$(line)"
end
formatted_message *= "\n"
if typeof(logger.stream) <: IO
write(logger.stream, formatted_message)
else
MPI.File.write_shared(logger.stream, formatted_message)
end
return nothing
end
function handle_message(logger::QXLoggerMPIPerRank, level, message, _module, group, id,
filepath, line; maxlog = nothing, kwargs...)
if !isnothing(maxlog) && maxlog isa Int
remaining = get!(logger.message_limits, id, maxlog)
logger.message_limits[id] = remaining - 1
remaining > 0 || return nothing
end
if !isdir(joinpath(logger.root_path, "QXContexts_io_" * string(logger.session_id)))
mkdir(joinpath(logger.root_path, "QXContexts_io_" * string(logger.session_id)))
end
buf = IOBuffer()
rank = MPI.Comm_rank(logger.comm)
level_name = level_to_string(level)
log_path = joinpath(logger.root_path, "QXContexts_io_" * string(global_logger().session_id), "rank_$(rank).log")
file = open(log_path, read=true, write=true, create=true, append=true)
module_name = something(_module, "nothing")
msg_timestamp = stamp_builder(rank)
formatted_message = "$(msg_timestamp) $(level_name) $message"
if logger.show_info_source || level != Logging.Info
formatted_message *= " -@-> $(filepath):$(line)"
end
formatted_message *= "\n"
write(file, formatted_message)
close(file)
return nothing
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 867 | module Param
export parse_parameters, SliceIterator
import YAML
using DataStructures
"""
    parse_parameters(filename::String)
Parse the parameters yml file to read information about partition parameters and output
sampling method.
Example Parameter file
======================
output:
method: List
params:
bitstrings:
- "01000"
- "01110"
- "10101"
"""
function parse_parameters(filename::String)
param_dict = YAML.load_file(filename, dicttype=OrderedDict{String, Any})
# parse the output method section of the parameter file
method_params = OrderedDict{Symbol, Any}(Symbol(x[1]) => x[2] for x in param_dict["output"])
method_params[:params] = OrderedDict{Symbol, Any}(Symbol(x[1]) => x[2] for x in method_params[:params])
method_params
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 7735 | module Sampling
export ListSampler, RejectionSampler, UniformSampler
export create_sampler
using Random
using DataStructures
using QXContexts.Contexts
# Module containing sampler objects which provide different levels of sampling features.
# Each sampler has a constructor which takes a context to perform sampling in and a set
# of keyword arguments that control the sampling behavior.
#
# Sampler(ctx; kwargs...): Initialise the sampler
#
# Each sampler is also callable with arguments that control its execution
#
# (s::Sampler)(kwargs...): Perform sampling and return sampling results
#
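# For example (sketch, assuming a context `ctx` has already been constructed):
#
#     s = ListSampler(ctx; bitstrings = ["000", "111"])
#     bitstrings, amplitudes = s(max_amplitudes = 2)
#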
"""Abstract type for samplers"""
abstract type AbstractSampler end
"""Functions to generate random bitstrings"""
random_bitstring(rng, num_qubits) = prod(rand(rng, ["0", "1"], num_qubits))
random_bitstrings(rng, num_qubits, num_samples) = [random_bitstring(rng, num_qubits) for _ in 1:num_samples]
###############################################################################
# ListSampler
###############################################################################
"""
A Sampler struct to compute the amplitudes for a list of bitstrings.
"""
struct ListSampler <: AbstractSampler
ctx::AbstractContext
list::Vector{String}
end
"""
    ListSampler(ctx
                ;bitstrings::Vector{String}=String[],
                kwargs...)
Constructor for a ListSampler to compute amplitudes for the given `bitstrings`. An optional
`num_samples` keyword argument limits how many of the `bitstrings` are used.
"""
function ListSampler(ctx
;bitstrings::Vector{String}=String[],
kwargs...)
if haskey(kwargs, :num_samples)
n = kwargs[:num_samples]
n = min(n, length(bitstrings))
else
n = length(bitstrings)
end
ListSampler(ctx, bitstrings[1:n])
end
"""
(s::ListSampler)(max_amplitudes=nothing, kwargs...)
Callable for ListSampler struct. Calculates amplitudes for each bitstring in the list
"""
function (s::ListSampler)(;max_amplitudes=nothing, kwargs...)
bs = if max_amplitudes === nothing
s.list
else s.list[1:min(max_amplitudes, length(s.list))] end
amps = ctxmap(x -> compute_amplitude!(s.ctx, x; kwargs...), s.ctx, bs)
amps = ctxgather(s.ctx, amps)
if amps !== nothing return (bs, amps) end
end
create_sampler(ctx, sampler_params) = get_constructor(sampler_params[:method])(ctx ;sampler_params[:params]...)
get_constructor(func_name::String) = getfield(Main, Symbol(func_name*"Sampler"))
###############################################################################
# RejectionSampler
###############################################################################
"""
A Sampler struct to use rejection sampling to produce output.
"""
mutable struct RejectionSampler <: AbstractSampler
ctx::AbstractContext
num_qubits::Integer
num_samples::Integer
M::Real
fix_M::Bool
rng::MersenneTwister
end
"""
    RejectionSampler(ctx::AbstractContext;
                     num_qubits::Integer,
                     num_samples::Integer,
                     M::Real=0.0001,
                     fix_M::Bool=false,
                     seed::Integer=42,
                     kwargs...)
Constructor for a RejectionSampler to produce and accept a number of bitstrings.
"""
function RejectionSampler(ctx::AbstractContext;
num_qubits::Integer,
num_samples::Integer,
M::Real=0.0001,
fix_M::Bool=false,
seed::Integer=42,
kwargs...)
# Evenly divide the number of bitstrings to be sampled amongst the subgroups of ranks.
# num_samples = get_rank_size(num_samples, comm_size, rank)
rng = MersenneTwister(seed) # TODO: should somehow add the rank to the seed, maybe with get_rank(ctx)?
RejectionSampler(ctx, num_qubits, num_samples, M, fix_M, rng)
end
"""
(s::RejectionSampler)(max_amplitudes=nothing, kwargs...)
Callable for RejectionSampler struct. Computes amplitudes for uniformly distributed bitstrings and corrects the distribution
using a rejection step.
"""
function (s::RejectionSampler)(;max_amplitudes=nothing, kwargs...)
num_samples = max_amplitudes === nothing ? s.num_samples : max_amplitudes
N = 2^s.num_qubits
M = s.M
samples = Samples()
accepted = 0
while accepted < num_samples
        # produce candidate bitstrings
bitstrings = random_bitstrings(s.rng, s.num_qubits, num_samples-accepted)
# compute amplitudes for the bitstrings
amps = [compute_amplitude!(s.ctx, bs; kwargs...) for bs in bitstrings]
# bs_amp_pairs = [bs => compute_amplitude!(s.ctx, bs; kwargs...) for bs in bitstrings if !(bs in keys(samples.amplitudes))]
# Record the computed amplitudes and update M if required
for (bs, amp) in zip(bitstrings, amps)
samples.amplitudes[bs] = amp
Np = N * abs(amp)^2
s.fix_M || (M = max(Np, M))
end
s.fix_M || (M = ctxreduce(max, s.ctx, M))
# Conduct a rejection step for each bitstring to correct the distribution of samples.
for (bs, amp) in zip(bitstrings, amps)
Np = N * abs(amp)^2 # This is computed twice
if rand(s.rng) < Np / M
accepted += 1
samples.bitstrings_counts[bs] += 1
end
end
end
ctxgather(s.ctx, samples)
samples
end
###############################################################################
# UniformSampler
###############################################################################
"""
A Sampler struct to uniformly sample bitstrings and compute their amplitudes.
"""
mutable struct UniformSampler <: AbstractSampler
ctx::AbstractContext
num_qubits::Integer
num_samples::Integer
rng::MersenneTwister
end
"""
UniformSampler(ctx::AbstractContext;
num_qubits::Integer,
num_samples::Integer,
seed::Integer=42,
kwargs...)
Constructor for a UniformSampler to uniformly sample bitstrings.
"""
function UniformSampler(ctx::AbstractContext;
num_qubits::Integer,
num_samples::Integer,
seed::Integer=42,
kwargs...)
# Evenly divide the number of bitstrings to be sampled amongst the subgroups of ranks.
# num_samples = (num_samples ÷ comm_size) + (rank < num_samples % comm_size)
rng = MersenneTwister(seed)
UniformSampler(ctx, num_qubits, num_samples, rng)
end
"""
(s::UniformSampler)(max_amplitudes=nothing, kwargs...)
Callable for UniformSampler struct. Computes amplitudes for uniformly distributed bitstrings.
"""
function (s::UniformSampler)(;max_amplitudes=nothing, kwargs...)
num_samples = max_amplitudes === nothing ? s.num_samples : max_amplitudes
bs = random_bitstrings(s.rng, s.num_qubits, num_samples)
amps = ctxmap(x -> compute_amplitude!(s.ctx, x; kwargs...), s.ctx, bs)
amps = ctxgather(s.ctx, amps)
(bs, amps)
end
###############################################################################
# Sampler Struct
###############################################################################
"""
Struct to hold the results of a simulation.
"""
struct Samples{T}
bitstrings_counts::DefaultDict{String, <:Integer}
amplitudes::Dict{String, T}
end
Samples() = Samples(DefaultDict{String, Int}(0), Dict{String, ComplexF32}())
Base.length(s::Samples) = sum(values(s.bitstrings_counts))
Base.unique(s::Samples) = keys(s.bitstrings_counts)
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 6591 | export AbstractCommand, CommandList, params, inputs, output
export ContractCommand, LoadCommand, OutputCommand, SaveCommand, ReshapeCommand, ViewCommand
"""
We define structs for each command type with each implementing the interface
NameCommand(s::AbstractString): a constructor that creates an instance from a string representation
write(io::IO, c::NameCommand): serialises to given io object
output(c::NameCommand): gives symbol of output from command
inputs(c::NameCommand): gives vector of inputs
params(c::NameCommand): dictionary of parameter symbols and dimensions
The command struct itself is callable with a default implementation which expects appropriate inputs
as arguments.
Summary of command and string format of each
Contract tensors: ncon <output_name:str> <output_idxs:list> <left_name:str> <left_idxs:list> <right_name:str> <right_idxs:list>
Load tensor: load <name:str> <label:str> <dims:list>
Save tensor: save <name:str> <label:str>
Reshape tensor: reshape <output:str> <input:str> <shape:list>
View tensor: view <name:str> <target:str> <slice_symbol:str> <bond_idx:Int> <bond_dim:Int>
Output: output <name:str> <idx:int> <dim:int>
"""
abstract type AbstractCommand end
params(::AbstractCommand) = Dict{Symbol, Int}()
CommandList = Vector{AbstractCommand}
"""Regex to match symbols"""
const sym_fmt = "[A-Z|a-z|0-9|_]*"
"""Structure to represent a contraction command"""
mutable struct ContractCommand <: AbstractCommand
output_name::Symbol
output_idxs::Vector{Int}
left_name::Symbol
left_idxs::Vector{Int}
right_name::Symbol
right_idxs::Vector{Int}
end
"""
ContractCommand(s::AbstractString)
Constructor which reads command from a string
"""
function ContractCommand(s::AbstractString)
p = match(Regex("ncon" * repeat(" ($sym_fmt) ([0-9|,]*)", 3)), s)
@assert p !== nothing "Command must begin with \"ncon\""
s = x -> Symbol(x)
l = x -> x == "0" ? Int[] : map(y -> parse(Int, y), split(x, ","))
ContractCommand(s(p[1]), l(p[2]), s(p[3]), l(p[4]), s(p[5]), l(p[6]))
end
"""
Base.write(io::IO, cmd::ContractCommand)
Function to serialise command to the given IO stream
"""
function Base.write(io::IO, cmd::ContractCommand)
j = x -> length(x) == 0 ? "0" : join(x, ",")
write(io, "ncon $(cmd.output_name) $(j(cmd.output_idxs)) $(cmd.left_name) $(j(cmd.left_idxs)) $(cmd.right_name) $(j(cmd.right_idxs))\n")
end
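# Example (sketch): parsing a contraction command from its string form and
# serialising it back:
#
#     cmd = ContractCommand("ncon c 1,3 a 1,2 b 2,3")
#     write(stdout, cmd)   # prints: ncon c 1,3 a 1,2 b 2,3
#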
output(c::ContractCommand) = c.output_name
inputs(c::ContractCommand) = [c.left_name, c.right_name]
"""Represents a command to load a tensor from storage"""
struct LoadCommand <: AbstractCommand
name::Symbol
label::Symbol
dims::Vector{Int}
end
output(c::LoadCommand) = c.name
inputs(::LoadCommand) = []
"""
LoadCommand(s::AbstractString)
Constructor to create instance of command from a string
"""
function LoadCommand(s::AbstractString)
m = match(Regex("^load ($sym_fmt) ($sym_fmt) ([0-9|,]*)"), s)
@assert m !== nothing "Load command must have format \"load [name_sym] [src_sym] [dims]\""
dims = parse.([Int], split(m.captures[3], ","))
LoadCommand(Symbol.(m.captures[1:2])..., dims)
end
"""Function to serialise command to the given IO stream"""
function Base.write(io::IO, cmd::LoadCommand)
dims = join(cmd.dims, ",")
write(io, "load $(cmd.name) $(cmd.label) $(dims)\n")
end
"""Command to save a tensor to storage"""
struct SaveCommand <: AbstractCommand
name::Symbol
label::Symbol
end
output(c::SaveCommand) = c.name
inputs(c::SaveCommand) = [c.label]
"""
SaveCommand(s::AbstractString)
Constructor which creates a command instance form a string
"""
function SaveCommand(s::AbstractString)
m = match(Regex("^save ($sym_fmt) ($sym_fmt)"), s)
@assert m !== nothing "Save command must have format \"save [name_sym] [src_sym]\""
SaveCommand(Symbol.(m.captures)...)
end
"""Function to serialise command to the given IO stream"""
Base.write(io::IO, cmd::SaveCommand) = write(io, "save $(cmd.name) $(cmd.label)\n")
"""Represents a command to reshape a tensor"""
struct ReshapeCommand <: AbstractCommand
output::Symbol
input::Symbol
dims::Vector{Vector{Int}}
end
"""
ReshapeCommand(s::AbstractString)
Constructor to create reshape command from string representation
"""
function ReshapeCommand(s::AbstractString)
m = match(Regex("^reshape ($sym_fmt) ($sym_fmt) ([0-9|,|;]*)"), s)
@assert m !== nothing "Reshape command must have format \"reshape [output] [input] [dims]\""
dims = [parse.(Int, split(x, ",")) for x in split(m.captures[3], ";")]
ReshapeCommand(Symbol.(m.captures[1:2])..., dims)
end
output(c::ReshapeCommand) = c.output
inputs(c::ReshapeCommand) = [c.input]
function Base.write(io::IO, c::ReshapeCommand)
dims_str = join(join.(c.dims, [","]), ";")
write(io, "reshape $(c.output) $(c.input) $(dims_str)\n")
end
"""Struct to represent a view on a tensor"""
struct ViewCommand <: AbstractCommand
output_sym::Symbol
input_sym::Symbol
slice_sym::Symbol
bond_index::Int
bond_dim::Int
end
output(c::ViewCommand) = c.output_sym
inputs(c::ViewCommand) = [c.input_sym]
params(c::ViewCommand) = Dict{Symbol, Int}(c.slice_sym => c.bond_dim)
"""
ViewCommand(s::AbstractString)
Constructor to create view command from string representation
"""
function ViewCommand(s::AbstractString)
m = match(Regex("^view ($sym_fmt) ($sym_fmt) ($sym_fmt) ([0-9]*) ([0-9]*)"), s)
    @assert m !== nothing "View command must have format \"view [output_sym] [input_sym] [slice_sym] [index] [dim]\""
ViewCommand(Symbol.(m.captures[1:3])..., parse.([Int], m.captures[4:5])...)
end
function Base.write(io::IO, c::ViewCommand)
write(io, "view $(c.output_sym) $(c.input_sym) $(c.slice_sym) $(c.bond_index) $(c.bond_dim)\n")
end
"""Command to communicate the number of outputs"""
struct OutputCommand <: AbstractCommand
name::Symbol
idx::Int
dim::Int
end
output(c::OutputCommand) = c.name
inputs(::OutputCommand) = Symbol[]
params(c::OutputCommand) = Dict{Symbol, Int}(Symbol("o$(c.idx)") => c.dim)
"""
OutputCommand(s::AbstractString)
Constructor to create instance of command from string
"""
function OutputCommand(s::AbstractString)
m = match(Regex("^output ($sym_fmt) ([0-9]*) ([0-9]*)"), s)
@assert m !== nothing "Output command must have format \"output [name] [idx] [dim]\""
OutputCommand(Symbol(m.captures[1]), parse.([Int], m.captures[2:3])...)
end
"""Function to serialise command to a string"""
Base.write(io::IO, c::OutputCommand) = write(io, "output $(c.name) $(c.idx) $(c.dim)\n") | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 262 | module ComputeGraphs
# command definitions
include("cmds.jl")
# compute graph data structure
include("tree.jl")
# tree optimisation functions
include("tree_opt.jl")
# tree statistics
include("tree_stats.jl")
# functions to parse dsl file
include("dsl.jl")
end
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 5376 | using YAML
using DataStructures
using FileIO
# Functions for parsing and writing dsl files
export parse_dsl, parse_dsl_files, generate_dsl_files
# Define compatible DSL file version number, which must match when parsed.
const DSL_VERSION = VersionNumber("0.4.0")
###############################################################################
# DSL Parsing functions
###############################################################################
"""
write_version_header(io::IO)
Function to write version head to DSL file with current version constant
"""
function write_version_header(io::IO)
write(io, "# version: $(DSL_VERSION)\n")
end
"""
check_compatible_version_dsl(line::String)
Checks if version is defined in line and checks compatibility with VERSION_DSL
"""
function check_compatible_version_dsl(line::String)
exists_version_dsl = startswith(strip(line), '#') && occursin("version:", line)
is_compatible::Bool = true
if exists_version_dsl
version_dsl = strip(last(split(line,"version:")))
version_dsl = VersionNumber(version_dsl)
# Simple logic enforcing matching versions, which can be extended
is_compatible = version_dsl == DSL_VERSION
else
is_compatible = false
version_dsl = nothing
end
return is_compatible, version_dsl
end
"""
parse_command(line::String)
Parse a DSL command
"""
function parse_command(line::AbstractString)
m = match(r"^([a-z]*)", line)
command = nothing
if m !== nothing
type = m.captures[1]
command = if type == "load" LoadCommand(line)
elseif type == "view" command = ViewCommand(line)
elseif type == "ncon" command = ContractCommand(line)
elseif type == "save" command = SaveCommand(line)
elseif type == "output" command = OutputCommand(line)
elseif type == "reshape" command = ReshapeCommand(line)
else
error("$(type) command has not been implemented yet")
end
end
command
end
"""
parse_dsl(buffer::Vector{String})
Parse a list of DSL commands and build a compute tree and metadata for execution
"""
function parse_dsl(buffer::Vector{<:AbstractString})
line = string(strip(first(buffer)))
is_compatible, version_dsl = check_compatible_version_dsl(line)
if !is_compatible
throw(ArgumentError("DSL version not compatible:\n\t'$version_dsl', expected '$DSL_VERSION'"))
end
# find index of first line that doesn't start with "#"
cmd_idx = findfirst(x -> x[1] != '#', buffer)
metadata_str = join(map(x -> replace(x, r"^# " => ""), buffer[2:cmd_idx-1]), "\n")
metadata = length(metadata_str) > 0 ? YAML.load(metadata_str) : OrderedDict()
cmds = Vector{AbstractCommand}()
for line in buffer[cmd_idx:end]
line_command = string(first(split(line, '#')))
if !isempty(line_command)
push!(cmds, parse_command(line_command))
end
end
build_tree(cmds), metadata
end
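# For example (sketch of a minimal DSL buffer; tensor labels and dimensions are arbitrary):
#
#     root, metadata = parse_dsl(["# version: 0.4.0",
#                                 "load a a_data 2,2",
#                                 "load b b_data 2,2",
#                                 "ncon c 1,3 a 1,2 b 2,3",
#                                 "save result c"])
#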
"""
parse_dsl(filename::String)
Read a DSL file and build a compute tree and metadata for execution
"""
function parse_dsl(filename::String)
return open(filename) do file
parse_dsl(readlines(file))
end
end
"""
parse_dsl_files(dsl_file::String, data_file::String)
Read a DSL file and tensors file and return compute tree and meta data
"""
function parse_dsl_files(dsl_file::String, data_file::String)
root_node, metadata = parse_dsl(dsl_file)
@assert splitext(data_file)[end] == ".jld2" "Data file should have \".jld2\" suffix"
tensors = Dict(Symbol(x) => y for (x, y) in pairs(load(data_file)))
ComputeGraph(root_node, tensors), metadata
end
###############################################################################
# DSL writing functions
###############################################################################
"""
Base.write(io::IO, Union{ComputeNode, ComputeGraph}; metadata=nothing)
Write the compute tree to an ASCII file, optionally including the given `metadata`.
"""
function Base.write(io::IO, cn::ComputeNode; metadata=nothing)
# first we write the version header
write_version_header(io)
if metadata !== nothing
        # prepend each line of yaml output with "# "
yml_str = YAML.write(metadata)
yml_str = join(["# " * x for x in split(yml_str, "\n")], "\n") * "\n"
write(io, yml_str)
end
for each in PostOrderDFS(cn)
write(io, each.op)
end
end
"""
generate_dsl_files(compute_tree::ComputeGraph,
prefix::String;
force::Bool=true,
metadata=nothing)
Function to create dsl and data files to contract the given tensor network circuit
with the plan provided
"""
function generate_dsl_files(compute_tree::ComputeGraph,
prefix::String;
force::Bool=true,
metadata=nothing)
dsl_filename = "$(prefix).qx"
data_filename = "$(prefix).jld2"
@assert force || !isfile(dsl_filename) "Error $(dsl_filename) already exists"
@assert force || !isfile(data_filename) "Error $(data_filename) already exists"
open(dsl_filename, "w") do dsl_io
write(dsl_io, compute_tree.root; metadata=metadata)
end
save(data_filename, Dict(String(x) => y for (x, y) in pairs(compute_tree.tensors)))
nothing
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 3244 | using AbstractTrees
using YAML
export build_tree, ComputeNode, ComputeGraph, get_commands, params
# In this file we define a tree data structure which can be used for optimisation passes
# over contraction commands
"""Generic tree node data structure"""
mutable struct ComputeNode{T}
op::Union{Nothing, T}
children::Vector{ComputeNode}
parent::ComputeNode
# Root constructor
ComputeNode{T}(data) where T = new{T}(data, ComputeNode[])
ComputeNode{T}() where T = new{T}(nothing, ComputeNode[])
# Child node constructor
ComputeNode{T}(data, parent::ComputeNode{U}) where {T, U} = new{T}(data, ComputeNode[], parent)
end
ComputeNode(op) = ComputeNode{typeof(op)}(op)
"""Represent a compute graph with root node and initial tensors"""
struct ComputeGraph
root::ComputeNode
tensors::Dict{Symbol, AbstractArray}
end
"""
    get_commands(cn::ComputeNode, type::Type=Any; by=nothing, iterf=PostOrderDFS)
Utility function that retrieves a list of commands of the given type, sorted according to the
provided criteria. By default they are returned in the order given by a post-order depth-first
traversal, which returns leaves before parents.
"""
function get_commands(cn::ComputeNode, type::Type=Any; by=nothing, iterf=PostOrderDFS)
cmds = map(x -> x.op, filter(x -> x.op isa type, collect(iterf(cn))))
if by !== nothing
sort!(cmds, by=by)
end
cmds
end
"""Implement for ComputeGraph also"""
get_commands(cg::ComputeGraph, args...; kwargs...) = get_commands(cg.root, args...; kwargs...)
"""
AbstractTrees.children(node::ComputeNode)
Implement children function from AbstractTrees package
"""
function AbstractTrees.children(node::ComputeNode)
Tuple(node.children)
end
"""
params(node::ComputeNode, optype::Type=Any)
Compile all parameters from this node and descendents with optional op type
qualifier. This can be used to return only output or view parameters with
params(node, OutputCommand)
or
params(node, ViewCommand)
"""
function params(node::ComputeNode, optype::Type=Any)
local_params = node.op isa optype ? params(node.op) : Dict{Symbol, Int}()
merge!(local_params, params.(node.children, [optype])...)
end
params(cg::ComputeGraph, args...) = params(cg.root, args...)
output(node::ComputeNode) = output(node.op)
output(cg::ComputeGraph) = output(cg.root)
###########################################################################
# Functions and data structures for trees of contraction commands
###########################################################################
"""
build_tree(cmds::Vector{<: AbstractCommand})
Function to construct a tree from a list of commands
"""
function build_tree(cmds::Vector{<: AbstractCommand})
nodes = Dict{Symbol, ComputeNode}()
for op in cmds
node = ComputeNode(op)
for input in inputs(op)
if haskey(nodes, input)
push!(node.children, nodes[input])
nodes[input].parent = node
end
end
nodes[output(op)] = node
end
parentless = collect(keys(filter(x -> !isdefined(x[2], :parent), nodes)))
@assert length(parentless) == 1 "Only root node should have no parent"
root = parentless[1]
nodes[root]
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 13963 | # """
# Functions for optimising the tree
# """
# """
# permute_and_merge!(node::ComputeNode, new_index_order=nothing)
# Descend contraction tree simplifying contraction commands by permuting and merging together
# indices so that contraction operations are more efficient. When not a contraction or reshape
# node we can ignore and pass ordering on.
# """
# function permute_and_merge!(node::ComputeNode, new_index_order=nothing)
# permute_and_merge!.(node.children, [new_index_order])
# nothing
# end
# """
# permute_and_merge!(node::ComputeNode{ReshapeCommand}, new_index_order=nothing)
# Descend contraction tree simplifying contraction commands by permuting and merging together
# indices so that contractions operations are more efficient. For a reshape command, modify
# the index ordering to apply to indices pre-reshape operation.
# """
# function permute_and_merge!(node::ComputeNode{ReshapeCommand}, new_index_order=nothing)
# if new_index_order !== nothing
#
# end
# permute_and_merge!.(node.children, [new_index_order])
# nothing
# end
# """
# permute_and_merge!(node::ComputeNode{ContractCommand}, new_index_order=nothing)
# Descend contraction tree simplifying contraction commands by permuting and merging together
# indices so that contractions operations are more efficient.
# For example, when there are a sequence of contractions like
# ````
# ncon c 1,2,3,4 a 1,2,3,4 b 0
# ncon e 1,2,3,4 d 1,2,3,4 e 0
# ncon f 1,4,5,6 c 1,2,3,4 e 5,6,3,2
# ```
# in the last contraction indices 2,3 are contracted and thus can be merged. We first permute as
# ```
# ncon d 1,4,5,6 c 2,3,1,4 e 2,3,5,6
# ```
# and then joining (2,3) => 2 which gives
# ```
# ncon ncon d 1,4,5,6 c 2,1,4 e 2,5,6
# ```
# This would require previous commands to be changed so full set would be
# ````
# ncon c (2,3),1,4 a 1,2,3,4 b 0
# ncon e (4,3),1,2 d 1,2,3,4 e 0
# ncon f 1,4,5,6 c 2,1,4 e 2,5,6
# ```
# where the parentheses indicate that these indices should be merged. To achieve the above open
# would send the following new_index_order arrays to left and right children.
# Left: [[2, 3], 1, 4]
# Right: [[4, 3], 1 ,2]
# Additionally, any indices appearing in all three tuples should be moved to the end.
# In this case we would have a recursive process which passes down indices list
# """
# function permute_and_merge!(node::ComputeNode{ContractCommand}, new_index_order=nothing)
# _permute_and_merge!(node.data, new_index_order)
# # identify batched, common and remaining indices
# batch_idxs = batched_indices(node.data) # these go at the end
# common_idxs = setdiff(intersect(node.data.left_idxs, node.data.right_idxs), batch_idxs) # these go at the start in order appear in left
# # we choose to order by the highest rank child
# if length(node.data.right_idxs) > length(node.data.left_idxs)
# common_idxs = filter(x -> x in common_idxs, node.data.right_idxs) # ensure it's ordered by right idxs
# else
# common_idxs = filter(x -> x in common_idxs, node.data.left_idxs) # ensure it's ordered by left idxs
# end
# remaining_idxs = setdiff(union(node.data.left_idxs, node.data.right_idxs), union(common_idxs, batch_idxs)) # these go in the middle
# remaining_idxs = filter(x -> x in remaining_idxs, node.data.output_idxs) # reorder by output
# if isdefined(node, :left)
# l_index_map = Dict(x => i for (i, x) in enumerate(node.data.left_idxs))
# m = x -> [l_index_map[y] for y in x]
# left_remaining_idxs = filter(x -> x in node.data.left_idxs, remaining_idxs)
# # left_remaining_idx_groups = find_overlaps(node.data.output_idxs, left_remaining_idxs)
# new_index_order = length(common_idxs) == 0 ? Vector{Vector{Int}}() : [m(common_idxs)]
# append!(new_index_order, [map(x -> [x], m(left_remaining_idxs))..., map(x -> [x], m(batch_idxs))...])
# # append!(new_index_order, [m.(left_remaining_idx_groups)..., map(x -> [x], m(batch_idxs))...])
# # @show left_remaining_idxs, remaining_idxs
# permute_and_merge!(node.left, new_index_order)
# new_left_idxs = length(common_idxs) > 0 ? [common_idxs[1]] : Int[]
# append!(new_left_idxs, [left_remaining_idxs..., batch_idxs...])
# empty!(node.data.left_idxs)
# append!(node.data.left_idxs, new_left_idxs)
# end
# if isdefined(node, :right)
# r_index_map = Dict(x => i for (i, x) in enumerate(node.data.right_idxs))
# m = x -> [r_index_map[y] for y in x]
# right_remaining_idxs = filter(x -> x in node.data.right_idxs, remaining_idxs)
# # right_remaining_idx_groups = find_overlaps(node.data.output_idxs, right_remaining_idxs)
# new_index_order = length(common_idxs) == 0 ? Vector{Vector{Int}}() : [m(common_idxs)]
# append!(new_index_order, [map(x -> [x], m(right_remaining_idxs))..., map(x -> [x], m(batch_idxs))...])
# # append!(new_index_order, [m.(right_remaining_idx_groups)..., map(x -> [x], m(batch_idxs))...])
# permute_and_merge!(node.right, new_index_order)
# new_right_idxs = length(common_idxs) > 0 ? [common_idxs[1]] : Int[]
# append!(new_right_idxs, [right_remaining_idxs..., batch_idxs...])
# empty!(node.data.right_idxs)
# append!(node.data.right_idxs, new_right_idxs)
# end
# nothing
# end
# batched_indices(cmd) = intersect(cmd.output_idxs, cmd.left_idxs, cmd.right_idxs)
# function _permute_and_merge!(cmd::ContractCommand, new_index_order)
# if new_index_order !== nothing
# # get mapping between indices as seen by next command, ordered 1,2..n where n is rank
# # and grouped according to reshape_groups
# current_idxs = Dict{Int, Vector{Int}}()
# pos = 1
# for (i, group_size) in enumerate(cmd.reshape_groups)
# current_idxs[i] = cmd.output_idxs[pos:pos+group_size-1]
# pos += group_size
# end
# flat_order = vcat(new_index_order...)
# # @show cmd.output_name, new_index_order
# # @show cmd.output_idxs, vcat(map(x -> current_idxs[x], flat_order)...)
# cmd.output_idxs[:] = vcat(map(x -> current_idxs[x], flat_order)...)
# # update reshape groups to reflect any new merges
# new_reshape_groups = []
# pos = 1
# for l in length.(new_index_order)
# push!(new_reshape_groups, sum(cmd.reshape_groups[pos:pos+l-1]))
# pos += l
# end
# empty!(cmd.reshape_groups)
# append!(cmd.reshape_groups, new_reshape_groups)
# end
# end
# """
# join_remaining!(node::ComputeNode{ContractCommand}, new_index_order=nothing)
# Identify groups of indices that are grouped in the output indices and also appear in the same
# order in the left and right index groups. Merge these in left and right index groups and pass
# this grouping to children
# Example:
# """
# function join_remaining!(node::ComputeNode{ContractCommand}, new_index_order=nothing)
# _join_remaining!(node.data, new_index_order)
# # find output indices that are merged in the reshape after the contraction
# pos = 1
# output_groups = Vector{Vector{Int}}()
# for j in node.data.reshape_groups
# push!(output_groups, node.data.output_idxs[pos:pos+j-1])
# pos += j
# end
# # identify batched, common and remaining indices
# batch_idxs = batched_indices(node.data) # these go at the end
# common_idxs = setdiff(intersect(node.data.left_idxs, node.data.right_idxs), batch_idxs) # these go at the start in order appear in left
# common_idxs = filter(x -> x in common_idxs, node.data.left_idxs) # ensure it's ordered by left idxs
# remaining_idxs = setdiff(union(node.data.left_idxs, node.data.right_idxs), union(common_idxs, batch_idxs)) # these go in the middle
# remaining_idxs = filter(x -> x in remaining_idxs, node.data.output_idxs) # reorder by output
# if isdefined(node, :left)
# l_index_map = Dict(x => i for (i, x) in enumerate(node.data.left_idxs))
# m = x -> [l_index_map[y] for y in x]
# left_remaining_idxs = filter(x -> x in node.data.left_idxs, remaining_idxs)
# left_remaining_idx_groups = find_overlaps(output_groups, left_remaining_idxs)
# # we join these in the output idxs
# join_output_idxs!(node.data, left_remaining_idx_groups)
# we replace each index with its position and then call join_remaining on the node
# new_index_order = length(common_idxs) == 0 ? Vector{Vector{Int}}() : [m(common_idxs)]
# append!(new_index_order, [m.(left_remaining_idx_groups)..., map(x -> [x], m(batch_idxs))...])
# join_remaining!(node.left, new_index_order)
# new_left_idxs = length(common_idxs) > 0 ? [common_idxs[1]] : Int[]
# append!(new_left_idxs, [map(x -> x[1], left_remaining_idx_groups)..., batch_idxs...])
# empty!(node.data.left_idxs)
# append!(node.data.left_idxs, new_left_idxs)
# end
# if isdefined(node, :right)
# r_index_map = Dict(x => i for (i, x) in enumerate(node.data.right_idxs))
# m = x -> [r_index_map[y] for y in x]
# right_remaining_idxs = filter(x -> x in node.data.right_idxs, remaining_idxs)
# right_remaining_idx_groups = find_overlaps(output_groups, right_remaining_idxs)
# # we join these in the output idxs
# join_output_idxs!(node.data, right_remaining_idx_groups)
# we replace each index with its position and then call join_remaining on the node
# new_index_order = length(common_idxs) == 0 ? Vector{Vector{Int}}() : [m(common_idxs)]
# append!(new_index_order, [m.(right_remaining_idx_groups)..., map(x -> [x], m(batch_idxs))...])
# join_remaining!(node.right, new_index_order)
# new_right_idxs = length(common_idxs) > 0 ? [common_idxs[1]] : Int[]
# append!(new_right_idxs, [map(x -> x[1], right_remaining_idx_groups)..., batch_idxs...])
# empty!(node.data.right_idxs)
# append!(node.data.right_idxs, new_right_idxs)
# end
# nothing
# end
# function _join_remaining!(cmd::ContractCommand, new_index_order)
# if new_index_order !== nothing
# # get mapping between indices as seen by next command, ordered 1,2..n where n is rank
# # and grouped according to reshape_groups
# current_idxs = Dict{Int, Vector{Int}}()
# pos = 1
# for (i, group_size) in enumerate(cmd.reshape_groups)
# current_idxs[i] = cmd.output_idxs[pos:pos+group_size-1]
# pos += group_size
# end
# flat_order = vcat(new_index_order...)
# cmd.output_idxs[:] = vcat(map(x -> current_idxs[x], flat_order)...)
# # update reshape groups to reflect any new merges
# new_reshape_groups = []
# pos = 1
# for l in length.(new_index_order)
# push!(new_reshape_groups, sum(cmd.reshape_groups[pos:pos+l-1]))
# pos += l
# end
# empty!(cmd.reshape_groups)
# append!(cmd.reshape_groups, new_reshape_groups)
# end
# end
# """
# find_overlaps(arrays::Vector{<: Vector{T}}, sub_array::Vector{T}) where T
# Given a set of arrays and a single sub array, this function will return subsets
# of the sub_array which appear in sequence next to each other
# Example:
# arrays: [[1,2,3], [8,9,12,14]]
# sub_array: [1,2,9,12,13,14]
# should return
# [[1,2],[9,12],[14]]
# """
# function find_overlaps(arrays::Vector{<: Vector{T}}, sub_array::Vector{T}) where T
# sub_pos = 1
# groups = Vector{Vector{T}}()
# while sub_pos <= length(sub_array)
# array_pos = findfirst(x -> sub_array[sub_pos] in x, arrays)
# if array_pos !== nothing
# array = arrays[array_pos]
# pos = findfirst(x -> x == sub_array[sub_pos], array)
# i = 1
# while pos + i <= length(array) &&
# sub_pos + i <= length(sub_array) &&
# array[pos+i] == sub_array[sub_pos+i]
# i += 1
# end
# push!(groups, sub_array[sub_pos:sub_pos+i-1])
# else i = 1 end
# sub_pos += i
# end
# groups
# end
# """
# group_elements(group_sizes::Vector{Int}, elements::Vector)
# Function to group elements in a vector according to given group sizes
# """
# function group_elements(group_sizes::Vector{Int}, elements::Vector)
# pos = 1
# @assert sum(group_sizes) == length(elements) "Sum of group sizes should match vector length"
# groups = Vector{Vector{eltype(elements)}}()
# for j in group_sizes
# push!(groups, elements[pos:pos+j-1])
# pos += j
# end
# groups
# end
# """
# join_output_idxs!(cmd::ContractCommand, groups)
# Given a list of groups of indices we can replace these groups in the output
# index set with the first index in each group. We also update the reshape
# groups to take this into account
# Example:
# --------
# output_idxs: 4,5,2,1
# reshape_groups: 3,1
# groups: [[4,5],[2],[1]]
# expected output:
# ----------------
# output_idxs: 4,2,1
# reshape_groups: 2,1
# """
# function join_output_idxs!(cmd::ContractCommand, groups)
# output_groups = group_elements(cmd.reshape_groups, cmd.output_idxs)
# for group in groups
# if length(group) > 1
# # find relevant output group
# output_group_idx = findfirst(x -> length(intersect(x, group)) > 0, output_groups)
# # replace indices in this group with first element
# output_group = output_groups[output_group_idx]
# pos = findfirst(x -> x == group[1], output_group)
# for _ in 1:length(group)-1 popat!(output_group, pos+1) end
# end
# end
# empty!(cmd.output_idxs)
# append!(cmd.output_idxs, vcat(output_groups...))
# empty!(cmd.reshape_groups)
# append!(cmd.reshape_groups, length.(output_groups))
# end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1389 | export cost, max_degree, depth, balance
indices(c::ContractCommand) = c.left_idxs, c.right_idxs, c.output_idxs
cost(c::AbstractCommand) = 0
"""
cost(cmd::ContractCommand)::Number
Function to calculate the cost of the given contraction in FLOPS
"""
function cost(cmd::ContractCommand)::Number
a, b, c = indices(cmd)
batched_idxs = intersect(a, b, c)
a = setdiff(a, batched_idxs)
b = setdiff(b, batched_idxs)
c = setdiff(c, batched_idxs)
common = intersect(a, b)
remaining = symdiff(a, b)
1 << (length(common) + length(remaining) + length(batched_idxs))
end
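# Worked example (illustrative sketch, kept as a comment): for a command built as
#   ContractCommand(:c, [1, 2, 4], :a, [1, 2, 3], :b, [3, 4])
# there are no batched indices, index 3 is common to both inputs and indices 1, 2, 4
# remain, so cost(cmd) == 1 << (1 + 3) == 16 FLOPS.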
max_degree(c::ContractCommand) = maximum(length.(indices(c)))
cost(n::ComputeNode) = cost(n.op) + sum(cost.(children(n)), init=0)
max_degree(n::ComputeNode) = maximum(max_degree.(children(n)), init=0)
max_degree(n::ComputeNode{ContractCommand}) = maximum([max_degree(n.op), max_degree.(children(n))...])
depth(n::ComputeNode) = 1 + maximum(depth.(children(n)), init=0)
Base.length(n::ComputeNode) = 1 + sum(length.(children(n)), init=0)
balance(n::ComputeNode) = ceil(log2(length(n)))/depth(n)
"""Implement show for compute node to display some useful metrics"""
function Base.show(io::IO, c::ComputeNode{T}) where T
print(io, "ComputeNode{$(T)}: children: $(length(c)-1), ",
"depth: $(depth(c)), balance: $(balance(c)), ",
"max_degree: $(max_degree(c))")
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 10706 | """
Here we define the interface that all contexts implement and provide a simple implementation
Constructor takes compute graph and any implementation specific parameters and initialises initial
tensor and parameter storage
Each context implements the following functions to access tensor and parameter information:
gettensor(ctx, sym): get the tensor for the given symbol
settensor!(ctx, value, sym): set the tensor data for the given symbol
deletetensor!(ctx, sym): delete the tensor after it is no longer required
Base.getindex(ctx, sym): get the parameter value corresponding to given symbol
Base.setindex!(ctx, value, sym): set the parameter value corresponding to given symbol
Base.haskey(ctx, sym): Check if the parameter key exists
Base.zeros(ctx, size): create array of zeros with appropriate type for context
Base.zero(ctx): create scalar with same type used in context
Base.eltype(ctx): return the element type of numeric datastructures
set_open_bonds!(ctx, bitstring::String): Set output parameters according to provided bitstring
set_slice_vals!(ctx, slice_values::Vector{Int}): Set slice parameters according to provided slice values
(c::ctx)(): Execute the compute graph in the provided context. Returns final tensor
compute_amplitude!(ctx, bitstring; max_slices=nothing): Contracts the network to compute the amplitude of the given bitstring. Includes a reduction over slices
This module also provides an implementation of each command which uses the above functions to get the required
tensors and parameters. For example, for the contraction command this function is defined as
(c::ContractCommand)(ctx): implements the contraction described by the given contraction command with the context
Each context also implements the following functions, which distributed contexts override:
ctxmap(f, ctx, items): Applies function f to items
ctxreduce(f, ctx, items): Performs reduction using the function f on items
ctxgather(ctx, items): Gathers all items
"""
using QXContexts.ComputeGraphs
using OMEinsum
using DataStructures
using CUDA
export gettensor, settensor!, deletetensor!, set_open_bonds!, set_slice_vals!
export AbstractContext, QXContext, compute_amplitude!
export ctxmap, ctxgather, ctxreduce
abstract type AbstractContext end
##################################################################################
# Provide implementation of each command
##################################################################################
"""
(c::ContractCommand)(ctx::AbstractContext)
Execute a contraction command in the given context
"""
function (c::ContractCommand)(ctx::AbstractContext)
output_idxs = Tuple(c.output_idxs)
left_idxs = Tuple(c.left_idxs)
right_idxs = Tuple(c.right_idxs)
@debug "ncon DSL command: $(c.output_name)[$(output_idxs)] = $(c.left_name)[$(c.left_idxs)] * $(c.right_name)[$(c.right_idxs)]"
@debug "ncon shapes: left_size=$(size(gettensor(ctx, c.left_name))), right_size=$(size(gettensor(ctx, c.right_name)))"
@nvtx_range "NCON $(c.output_name)" begin
settensor!(ctx, EinCode((left_idxs, right_idxs), output_idxs)(gettensor(ctx, c.left_name), gettensor(ctx, c.right_name)), c.output_name)
end
deletetensor!(ctx, c.left_name)
deletetensor!(ctx, c.right_name)
nothing
end
"""
(c::LoadCommand)(ctx::AbstractContext)
Generic implementation of load command
"""
function (c::LoadCommand)(ctx::AbstractContext)
@nvtx_range "Load $(c.label)" begin
settensor!(ctx, gettensor(ctx, c.label), c.name)
end
end
"""
(c::SaveCommand)(ctx::AbstractContext)
Generic implementation of save command
"""
function (c::SaveCommand)(ctx::AbstractContext)
@nvtx_range "Load $(c.label)" begin
settensor!(ctx, gettensor(ctx, c.label), c.name)
end
end
"""
(c::ReshapeCommand)(ctx::AbstractContext)
Implementation of reshape command
"""
function (c::ReshapeCommand)(ctx::AbstractContext)
tensor_dims = size(gettensor(ctx, c.input))
new_dims = [prod([tensor_dims[y] for y in x]) for x in c.dims]
@nvtx_range "Reshape $(c.output)" begin
settensor!(ctx, reshape(gettensor(ctx, c.input), new_dims...), c.output)
end
@debug "Reshape DSL command: name=$(c.input)($(tensor_dims)) -> $(c.output)($(new_dims))"
nothing
end
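# Example (illustrative): with c.dims == [[1], [2, 3]] applied to a tensor of size
# (2, 2, 2), new_dims == [2, 4], i.e. the last two axes are merged into one.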
"""
(c::ViewCommand)(ctx::AbstractContext)
Execute the given view command using provided context
"""
function (c::ViewCommand)(ctx::AbstractContext)
bond_val = haskey(ctx, c.slice_sym) ? ctx[c.slice_sym] : nothing
@nvtx_range "View $(c.output_sym)" begin
if bond_val !== nothing
dims = size(gettensor(ctx, c.input_sym))
view_index_list = [i == c.bond_index ? UnitRange(bond_val, bond_val) : UnitRange(1, dims[i]) for i in 1:length(dims)]
new_tensor = @view gettensor(ctx, c.input_sym)[view_index_list...]
settensor!(ctx, new_tensor, c.output_sym)
@debug "view DSL command: $(c.output_sym) = $(c.input_sym)[$(view_index_list)]"
else
settensor!(ctx, gettensor(ctx, c.input_sym), c.output_sym)
@debug "view DSL command: $(c.output_sym) = $(c.input_sym)"
end
end
nothing
end
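# Example (illustrative): with c.bond_index == 2, ctx[c.slice_sym] == 3 and an input
# tensor of size (2, 4, 2), the view taken is input[1:2, 3:3, 1:2]; when the slice
# parameter is not set in the context the tensor is passed through unchanged.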
"""
(c::OutputCommand)(ctx::AbstractContext)
Execute the given output command using provided context
"""
function (c::OutputCommand)(ctx::AbstractContext)
@nvtx_range "Output $(c.idx)" begin
sym = Symbol("o$(c.idx)")
@assert haskey(ctx, sym) "Output $sym not set in context"
out_val = ctx[sym]
settensor!(ctx, gettensor(ctx, Symbol("output_$out_val")), c.name)
@debug "output DSL command: $(c.name) = o$(c.idx)"
end
end
##########################################################################################
# Provide concrete implementation of a context
#########################################################################################
"""Data structure for context implementation"""
struct QXContext{T} <: AbstractContext
params::Dict{Symbol, Int}
tensors::Dict{Symbol, T}
cg::ComputeGraph
slice_dims::OrderedDict{Symbol, Int}
output_dims::OrderedDict{Symbol, Int}
end
"""
QXContext{T}(cg::ComputeGraph) where T
Constructor which initialises an instance from a compute graph
"""
function QXContext{T}(cg::ComputeGraph) where T
tensors = Dict{Symbol, T}()
# load tensors from compute graph into this dictionary
for (k, v) in pairs(cg.tensors)
tensors[k] = convert(T, v)
end
slice_dims = convert(OrderedDict, params(cg, ViewCommand))
output_dims = convert(OrderedDict, params(cg, OutputCommand))
dims = collect(values(output_dims))
if length(dims) > 0
@assert all(dims .== dims[1]) "Multiple output dimensions not supported"
for d in 1:dims[1]
t = zeros(eltype(T), dims[1])
t[d] = 1.
tensors[Symbol("output_$(d-1)")] = convert(T, t)
end
end
sort!(slice_dims)
sort!(output_dims)
QXContext{T}(Dict{Symbol, Int}(), tensors, cg, slice_dims, output_dims)
end
QXContext(cg::ComputeGraph) = QXContext{Array{ComplexF32}}(cg)
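# The storage type parameter T controls where tensors live and in what precision.
# For example (illustrative sketch, assuming a functional CUDA device and CUDA.jl's
# CuArray type, which is brought into scope by `using CUDA` above):
#   gpu_ctx = QXContext{CuArray{ComplexF32}}(cg)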
"""
gettensor(ctx::QXContext, sym)
Function to retrieve tensors by key
"""
function gettensor(ctx::QXContext, sym)
@assert haskey(ctx.tensors, sym) "Tensor $sym does not exist in this context"
ctx.tensors[sym]
end
"""
settensor!(ctx::QXContext, value, sym)
Function to set tensors by key
"""
function settensor!(ctx::QXContext, value, sym)
ctx.tensors[sym] = value
end
"""
deletetensor!(ctx::QXContext, sym)
Function to delete tensors by key
"""
function deletetensor!(ctx::QXContext, sym)
delete!(ctx.tensors, sym)
end
"""Implement has key to check if parameter by this name present"""
Base.haskey(ctx::QXContext, sym) = haskey(ctx.params, sym)
"""
Base.getindex(ctx::QXContext, sym)::Int
Implement getindex for retrieving parameter values by key
"""
function Base.getindex(ctx::QXContext, sym)::Int
@assert haskey(ctx, sym) "Parameter $sym does not exist in this context"
ctx.params[sym]
end
"""
Base.setindex!(ctx::QXContext, value, sym)
Implement setindex! for setting parameter values
"""
function Base.setindex!(ctx::QXContext, value, sym)
ctx.params[sym] = value
end
Base.zeros(::QXContext{T}, size) where T = convert(T, zeros(eltype(T), size))
Base.zero(::QXContext{T}) where T = zero(eltype(T))
Base.eltype(::QXContext{T}) where T = eltype(T)
"""
set_open_bonds!(ctx::QXContext, bitstring::String)
Given a bitstring, set the open bonds to values so contracting the network will
calculate the amplitude of this bitstring
"""
function set_open_bonds!(ctx::QXContext, bitstring::String="")
if bitstring == "" bitstring = "0"^length(ctx.output_dims) end
@assert length(bitstring) == length(ctx.output_dims) "Bitstring length must match number of outputs"
for (i, key) in enumerate(keys(ctx.output_dims)) ctx[key] = parse(Int, bitstring[i]) end
end
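# Example (illustrative): for a context with two output bonds, set_open_bonds!(ctx, "01")
# sets ctx[:o1] = 0 and ctx[:o2] = 1, so the following contraction computes the
# amplitude of the bitstring "01".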
"""
set_slice_vals!(ctx::QXContext, slice_values::Vector{Int})
For each bond that is being sliced, set the index to slice on.
"""
function set_slice_vals!(ctx::QXContext, slice_values::Vector{Int})
# set slice values and remove any already set
for (i, key) in enumerate(keys(ctx.slice_dims))
if i > length(slice_values)
delete!(ctx.params, key)
else
ctx[key] = slice_values[i]
end
end
end
"""
(ctx::QXContext)()
Function to execute the compute graph when the struct is called. Returns the final tensor
"""
function (ctx::QXContext)()
for n in get_commands(ctx.cg) n(ctx) end
gettensor(ctx, output(ctx.cg))
end
"""
compute_amplitude!(ctx::QXContext, bitstring::String; max_slices=nothing)
Calculate a single amplitude with the given context and bitstring. Involves a sum over
contributions from each slice. The number of sliced bonds summed over can optionally be limited
via the max_slices keyword. By default all slices are used.
"""
function compute_amplitude!(ctx::QXContext, bitstring::String; max_slices=nothing)
set_open_bonds!(ctx, bitstring)
amplitude = nothing
for p in SliceIterator(ctx.cg, max_slices=max_slices)
set_slice_vals!(ctx, p)
if amplitude === nothing
amplitude = ctx()
else
amplitude += ctx()
end
end
amplitude = convert(Array, amplitude) # if a GPU array, convert back to a host array
if ndims(amplitude) == 0
amplitude = amplitude[]
end
amplitude
end
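# Illustrative usage sketch (kept as a comment; assumes a prepared context `ctx` and a
# list of bitstrings, for example produced by a sampler):
#   bitstrings = ["00000", "11111"]
#   amps = ctxmap(b -> compute_amplitude!(ctx, b), ctx, bitstrings)
#   results = ctxgather(ctx, amps)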
"""Map over items as placeholder for more complicated contexts"""
ctxmap(f, ctx::QXContext, items) = map(f, items)
"""Simple gather as placeholder for distributed contexts"""
ctxgather(ctx::QXContext, items) = items
"""Simple gather as placeholder for distributed contexts"""
ctxreduce(f, ctx::QXContext, items) = reduce(f, items) | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 333 | module Contexts
# some cuda related utilities
include("cuda.jl")
# data structures for iterating over slices
include("slices.jl")
# context interface and QXContext implementation
include("base.jl")
# MPI context implementation
include("mpi_context.jl")
# Context using Distributed.jl implementation
# include("dist_context.jl")
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 274 |
""""Macro which adds NVTX range only if CUDA is functional"""
macro nvtx_range(label, ex)
if CUDA.functional()
return quote
NVTX.@range $(esc(label)) $(esc(ex))
end
else
return quote
$(esc(ex))
end
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 5872 | using MPI
using Lazy
using Logging
export QXMPIContext
export get_rank_size, get_rank_start, get_rank_range
# Implementation of QXMPIContext which provides a Context that can be used with MPI
#
# The QXMPIContext struct contains a reference to another context struct which is used
# to perform contractions for individual sets of slice values
#
# Utility functions get_rank_size, get_rank_start and get_rank_range are provided as
# generic utility functions for working with multiple processing ranks
"""MPI context which can be used to perform distribted computations"""
struct QXMPIContext <: AbstractContext
serial_ctx::AbstractContext
comm::MPI.Comm # communicator containing all ranks
sub_comm::MPI.Comm # communicator containing ranks in subgroup, used for partitions
root_comm::MPI.Comm # communicator containing ranks of matching rank from other sub_comms
end
"""
QXMPIContext(ctx::QXContext, comm::Union{MPI.Comm, Nothing}=nothing; sub_comm_size::Int=1)
Constructor for QXMPIContext that initialises new sub-communicators for managing groups
of nodes.
"""
function QXMPIContext(ctx::QXContext,
comm::Union{MPI.Comm, Nothing}=nothing;
sub_comm_size::Int=1)
if comm === nothing
if !MPI.Initialized()
MPI.Init()
@info "MPI Initialised"
end
comm = MPI.COMM_WORLD
end
@info "Number processes $(MPI.Comm_size(MPI.COMM_WORLD))"
@assert MPI.Comm_size(comm) % sub_comm_size == 0 "sub_comm_size must divide comm size evenly"
sub_comm = MPI.Comm_split(comm, MPI.Comm_rank(comm) ÷ sub_comm_size, MPI.Comm_rank(comm) % sub_comm_size)
root_comm = MPI.Comm_split(comm, MPI.Comm_rank(comm) % sub_comm_size, MPI.Comm_rank(comm) ÷ sub_comm_size)
QXMPIContext(ctx, comm, sub_comm, root_comm)
end
######################################################################################
# Forward each of the methods to work with serial_ctx from the QXMPIContext
######################################################################################
@forward QXMPIContext.serial_ctx gettensor
@forward QXMPIContext.serial_ctx settensor!
@forward QXMPIContext.serial_ctx Base.getindex
@forward QXMPIContext.serial_ctx Base.setindex!
@forward QXMPIContext.serial_ctx Base.haskey
@forward QXMPIContext.serial_ctx Base.zeros
@forward QXMPIContext.serial_ctx Base.zero
@forward QXMPIContext.serial_ctx Base.eltype
@forward QXMPIContext.serial_ctx set_open_bonds!
@forward QXMPIContext.serial_ctx set_slice_vals!
"""Make struct callable"""
(ctx::QXMPIContext)(args...; kwargs...) = ctx.serial_ctx(args...; kwargs...)
"""
compute_amplitude!(ctx, bitstring::String; max_slices=nothing)
Calculate a single amplitude with the given context and bitstring. Involves a sum over
contributions from each slice. Can optionally set the number of bonds. By default all slices
are used.
"""
function compute_amplitude!(ctx::QXMPIContext, bitstring::String; max_slices=nothing)
set_open_bonds!(ctx, bitstring)
amplitude = nothing
si = SliceIterator(ctx.serial_ctx.cg, max_slices=max_slices)
r = get_comm_range(ctx.sub_comm, length(si))
for p in SliceIterator(si, r.start, r.stop)
set_slice_vals!(ctx, p)
if amplitude === nothing
amplitude = ctx()
else
amplitude += ctx()
end
end
# reduce across sub_communicator
# TODO: replace reduce with batched reduce or use one-sided communication to accumulate
# to root of sub_comm
MPI.Reduce!(amplitude, +, 0, ctx.sub_comm)
if MPI.Comm_rank(ctx.sub_comm) != 0
return nothing
end
if ndims(amplitude) == 0 amplitude = amplitude[] end
amplitude
end
"""
get_rank_size(n::Integer, size::Integer, rank::Integer)
Partition n items among processes of communicator of size size and return the size of the
given rank. Algorithm used is to:
1. Divide items equally among processes
2. Spread remainder over ranks in ascending order of rank number
"""
function get_rank_size(n::Integer, size::Integer, rank::Integer)
(n ÷ size) + ((n % size) >= (rank + 1))
end
"""
get_rank_start(n::Integer, size::Integer, rank::Integer)
Partition n items among processes of communicator of size size and return the starting of the
given rank. Algorithm used is to:
1. Divide items equally among processes
2. Spread remainder over ranks in ascending order of rank number
"""
function get_rank_start(n::Integer, size::Integer, rank::Integer)
start = rank * (n ÷ size) + 1
start + min(rank, n % size)
end
"""
get_rank_range(n::Integer, size::Integer, rank::Integer)
Partition n items among processes of a communicator of size `size` and return a range over the given rank's
indices.
"""
function get_rank_range(n::Integer, size::Integer, rank::Integer)
start = get_rank_start(n, size, rank)
UnitRange(start, start + get_rank_size(n, size, rank) - 1)
end
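# Worked example (illustrative): partitioning n = 10 items over size = 3 ranks gives
# get_rank_range(10, 3, 0) == 1:4, get_rank_range(10, 3, 1) == 5:7 and
# get_rank_range(10, 3, 2) == 8:10, i.e. the remainder item goes to the lowest rank.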
"""
get_comm_range(comm::MPI.Comm, n::Integer)
Given a communicator and number of items, get the range for local rank
"""
function get_comm_range(comm::MPI.Comm, n::Integer)
get_rank_range(n, MPI.Comm_size(comm), MPI.Comm_rank(comm))
end
"""
ctxmap(f, ctx::QXMPIContext, items)
For each of the items in the local range apply the f function and return
the result.
"""
function ctxmap(f, ctx::QXMPIContext, items)
map(f, items[get_comm_range(ctx.root_comm, length(items))])
end
"""
ctxgather(ctx::QXMPIContext, items)
Gather local items to root rank
"""
function ctxgather(ctx::QXMPIContext, items)
if MPI.Comm_rank(ctx.sub_comm) == 0
return MPI.Gather(items, 0, ctx.root_comm)
end
end
"""
ctxreduce(f, ctx::QXMPIContext, items)
Reduce across items with function f
"""
function ctxreduce(f, ctx::QXMPIContext, items)
if MPI.Comm_rank(ctx.sub_comm) == 0
return MPI.Reduce(items, f, 0, ctx.root_comm)
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1990 | export SliceIterator
using QXContexts.ComputeGraphs
"""
Data structure that implements iterator interface for iterating over multi-dimensional
objects with configurable start and end points.
"""
struct SliceIterator
iter::CartesianIndices
start::Int
stop::Int
end
"""
SliceIterator(dims::Vector{Int}, start::Int=1, stop::Int=-1)
Constructor for slice iterator which takes dimensions as argument
"""
function SliceIterator(dims::Vector{Int}, start::Int=1, stop::Int=-1)
iter = CartesianIndices(Tuple(dims))
if stop == -1 stop = length(iter) end
SliceIterator(iter, start, stop)
end
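# Example (illustrative): collect(SliceIterator([2, 2])) yields the slice value vectors
# [1, 1], [2, 1], [1, 2], [2, 2] in CartesianIndices (column-major) order, while
# SliceIterator([2, 2], 2, 3) restricts iteration to the middle two of these.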
"""
SliceIterator(cg::ComputeGraph, args...; max_slices=nothing)
Constructor to initialise an instance from a compute graph object. The optional max_slices keyword
argument allows the number of sliced bonds used to be limited.
"""
function SliceIterator(cg::ComputeGraph, args...; max_slices=nothing)
slice_params = params(cg, ViewCommand)
num_slices = (max_slices === nothing) ? length(slice_params) : min(max_slices, length(slice_params))
dims = map(x -> slice_params[Symbol("v$(x)")], 1:num_slices)
SliceIterator(dims, args...)
end
"""
SliceIterator(si::SliceIterator, start::Int=1, stop::Int=-1)
Constructor to initialise instance from an existing instance.
"""
function SliceIterator(si::SliceIterator, start::Int=1, stop::Int=-1)
new_start = si.start + start - 1
new_stop = si.start + stop - 1
@assert new_stop <= length(si.iter) "Stop index out of range"
SliceIterator(si.iter, new_start, new_stop)
end
"""Implement required iterator interface functions"""
Base.iterate(a::SliceIterator) = length(a) == 0 ? nothing : (Int[Tuple(a.iter[a.start])...], a.start)
Base.iterate(a::SliceIterator, state) = length(a) <= (state + 1 - a.start) ? nothing : (Int[Tuple(a.iter[state + 1])...], state + 1)
Base.length(a::SliceIterator) = a.stop - a.start + 1
Base.eltype(::SliceIterator) = Vector{Int}
Base.getindex(a::SliceIterator, i...) = collect(Tuple(a.iter[i...])) | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 368 | using QXContexts
root = dirname(dirname(@__DIR__))
prefixes = [joinpath(root, "examples/ghz/ghz_5"),
joinpath(root, "examples/rqc/rqc_4_4_24"),
joinpath(root, "examples/rqc/rqc_6_6_24")]
mktempdir() do path
for prefix in prefixes
execute(prefix * ".qx", prefix * ".yml", prefix * ".jld2", joinpath(path, "out.jld2"))
end
end
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 920 | # The following structure is adapted from the OpticSim.jl package @ # d492ca0
import Pkg, Libdl, PackageCompiler
function compile(sysimage_path = "JuliaSysimage.$(Libdl.dlext)"; dev=false)
env_to_precompile = dirname(dirname(@__DIR__))
precompile_execution_file = joinpath(@__DIR__, "precompile.jl")
project_filename = joinpath(env_to_precompile, "Project.toml")
project = Pkg.API.read_project(project_filename)
used_packages = Symbol.(collect(keys(project.deps)))
# Remove unneeded packages
filter!(x -> x ∉ [:Libdl, :PackageCompiler, :Pkg], used_packages)
if !dev
push!(used_packages, :QXContexts)
end
@info "Creating QXContexts.jl sysimg: $(sysimage_path)"
PackageCompiler.create_sysimage(
used_packages,
sysimage_path = sysimage_path,
project = env_to_precompile,
precompile_execution_file = precompile_execution_file
)
end
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 141 | using QXContexts
using Test
using TestSetExtensions
using Logging
@testset ExtendedTestSet "QXContexts.jl" begin
@includetests ARGS
end
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1546 | module TestCLI
using Test
using FileIO
using DataStructures
include("utils.jl")
# include source of bin file here to avoid world age issues
include("../bin/qxrun.jl")
@testset "Test prepare rqc input cli script" begin
ghz_example_dir = joinpath(dirname(@__DIR__), "examples", "ghz")
dsl_input = joinpath(ghz_example_dir, "ghz_5.qx")
# create empty temporary directory
mktempdir() do path
output_fname = joinpath(path, "out.jld2")
args = ["-d", dsl_input,
"-o", output_fname]
main(args)
@test isfile(output_fname)
output = load(output_fname, "results")
@test all([x[2] ≈ ghz_results[x[1]] for x in zip(output...)])
end
mktempdir() do path
output_fname = joinpath(path, "out.jld2")
args = ["-d", dsl_input,
"-o", output_fname,
"--number-amplitudes", "1"]
main(args)
@test isfile(output_fname)
output = load(output_fname, "results")
@test length(output[1]) == 1
@test output[2][1] ≈ ghz_results["11001"]
end
mktempdir() do path
output_fname = joinpath(path, "out.jld2")
args = ["-d", dsl_input,
"-o", output_fname,
"--number-amplitudes", "2",
"--number-slices", "1"]
main(args)
@test isfile(output_fname)
output = load(output_fname, "results")
@test length(output) == 2
@test all([output[2][x] ≈ ghz_results[output[1][x]] for x in 1:2])
end
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 3110 | module ComputeGraphTests
using Test
using QXContexts
include("utils.jl")
@testset "Compute Graph Tests" begin
@testset "Test commands" begin
read_io = c -> begin
io = IOBuffer()
write(io, c)
String(take!(io))
end
contract_example = "ncon t3 1,2,4 t1 1,2,3 t2 3,4"
c = ContractCommand(contract_example)
@test strip(read_io(c)) == contract_example # strip to remove \n
@test inputs(c) == [:t1, :t2]
@test output(c) == :t3
@test length(params(c)) == 0
load_example = "load t1 data_1 2,2"
c = LoadCommand(load_example)
@test strip(read_io(c)) == load_example # strip to remove \n
@test inputs(c) == []
@test output(c) == :t1
@test length(params(c)) == 0
save_example = "save result t3"
c = SaveCommand(save_example)
@test strip(read_io(c)) == save_example # strip to remove \n
@test inputs(c) == [:t3]
@test output(c) == :result
@test length(params(c)) == 0
reshape_example = "reshape t2 t1 1;2,3"
c = ReshapeCommand(reshape_example)
@test strip(read_io(c)) == reshape_example # strip to remove \n
@test inputs(c) == [:t1]
@test output(c) == :t2
@test length(params(c)) == 0
view_example = "view t1_s t1 v1 1 2"
c = ViewCommand(view_example)
@test strip(read_io(c)) == view_example # strip to remove \n
@test inputs(c) == [:t1]
@test output(c) == :t1_s
@test length(params(c)) == 1
@test params(c)[:v1] == 2
output_example = "output t1 2 2"
c = OutputCommand(output_example)
@test strip(read_io(c)) == output_example # strip to remove \n
@test inputs(c) == []
@test output(c) == :t1
@test length(params(c)) == 1
@test params(c)[:o2] == 2
end
@testset "Test build tree" begin
tree = build_tree(sample_cmds)
@test length(tree) == length(sample_cmds)
@test output(tree) == :result
# test get commands function
@test length(get_commands(tree, LoadCommand)) == 2
@test length(get_commands(tree, Union{LoadCommand, ReshapeCommand})) == 3
# test params
@test Set(collect(keys(params(tree)))) == Set([:o1, :v1])
# create compute graph
cg = ComputeGraph(tree, deepcopy(sample_tensors))
@test output(cg) == :result
end
@testset "Test dsl" begin
mktempdir() do path
tree = build_tree(sample_cmds)
fn = joinpath(path, "foo.qx")
open(fn, "w") do io
write(io, tree)
end
tree2, metadata = parse_dsl(fn)
@test length(tree) == length(tree2)
fn2 = joinpath(path, "foo2.qx")
open(fn2, "w") do io
write(io, tree2)
end
dsl_1 = open(fn, "r") do io read(io, String) end
dsl_2 = open(fn2, "r") do io read(io, String) end
@test dsl_1 == dsl_2
end
end
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1242 | module TestContexts
using Test
using QXContexts
include("utils.jl")
@testset "Test contexts module" begin
@testset "Test SliceIterator" begin
dims = [2,3,4]
si = SliceIterator(dims)
@test length(si) == prod(dims)
@test si[1] == [1,1,1]
@test si[dims...] == [2,3,4]
si = SliceIterator(dims, 1, 10)
@test length(si) == 10
si = SliceIterator(dims)
@test length(si) == prod(dims)
si2 = SliceIterator(si, 1, 5)
@test length(si2) == 5
si3 = SliceIterator(si2, 3, 5)
@test length(si3) == 3
end
@testset "Test QXContext" begin
cg = ComputeGraph(build_tree(sample_cmds), deepcopy(sample_tensors))
ctx = QXContext(cg)
# contract without setting view parameters
set_open_bonds!(ctx, "0")
output_0 = ctx()
@test size(output_0) == (2,)
set_open_bonds!(ctx, "1")
output_1 = ctx()
@test size(output_1) == (2,)
# contract while summing over view parameters
@test compute_amplitude!(ctx, "0") ≈ output_0
@test compute_amplitude!(ctx, "1") ≈ output_1
@test compute_amplitude!(ctx, "1", max_slices=0) ≈ output_1
end
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 4536 | module LoggerTests
using Logging
using QXContexts.Logger
using Test
using Dates
using MPI
@testset "Logger Tests" begin
@testset "QXLogger" begin
@testset "INFO test" begin
io = IOBuffer()
global_logger(QXLogger(io; show_info_source=true))
@info "info_test"
log = split(String(take!(io)), "\n")[1:end-1]
df = DateFormat("[yyyy/mm/dd-HH:MM:SS.sss]");
for l in log
log_elem = split(l, " ")
em = collect(eachmatch(r"\[(.*?)\]", log_elem[1])) # capture values in []
@test DateTime(em[1].match, df) !== nothing
@test match(r"[rank=/\d+/]", em[2].match) !== nothing
@test em[3].match == "[host=$(gethostname())]"
@test log_elem[2] == "INFO"
@test log_elem[3] == "info_test"
end
end
@testset "WARN test" begin
io = IOBuffer()
global_logger(QXLogger(io; show_info_source=true))
@warn "warn_test"; line_num = @__LINE__
log = split(String(take!(io)), "\n")[1:end-1]
df = DateFormat("[yyyy/mm/dd-HH:MM:SS.sss]");
for l in log
log_elem = split(l, " ")
em = collect(eachmatch(r"\[(.*?)\]", log_elem[1])) # capture values in []
@test DateTime(em[1].match, df) !== nothing
@test match(r"[rank=/\d+/]", em[2].match) !== nothing
@test em[3].match == "[host=$(gethostname())]"
@test log_elem[2] == "WARN"
@test log_elem[3] == "warn_test"
@test log_elem[4] == "-@->"
file_line = splitext(log_elem[5])
@test file_line[1] == splitext(@__FILE__)[1]
@test split(file_line[2], ":")[2] == string(line_num)
end
end
@testset "ERROR test" begin
io = IOBuffer()
global_logger(QXLogger(io; show_info_source=true))
line_num = @__LINE__; @error "error_test"
log = split(String(take!(io)), "\n")[1:end-1]
df = DateFormat("[yyyy/mm/dd-HH:MM:SS.sss]");
for l in log
log_elem = split(l, " ")
em = collect(eachmatch(r"\[(.*?)\]", log_elem[1])) # capture values in []
@test DateTime(em[1].match, df) !== nothing
@test match(r"[rank=/\d+/]", em[2].match) !== nothing
@test em[3].match == "[host=$(gethostname())]"
@test log_elem[2] == "ERROR"
@test log_elem[3] == "error_test"
@test log_elem[4] == "-@->"
file_line = splitext(log_elem[5])
@test file_line[1] == splitext(@__FILE__)[1]
@test split(file_line[2], ":")[2] == string(line_num)
end
end
end
@testset "QXLoggerMPIPerRank" begin
@testset "WARN test" begin
mktempdir() do path
# Fail to create logger if MPI not initialised
if !MPI.Initialized()
@test_throws """MPI is required for this logger. Pleasure ensure MPI is initialised. Use `QXLogger` for non-distributed logging""" QXLoggerMPIPerRank()
MPI.Init()
end
global_logger(QXLoggerMPIPerRank(; show_info_source=true, path=path))
line_num = @__LINE__; @warn "warn_test"
df = DateFormat("[yyyy/mm/dd-HH:MM:SS.sss]");
@test isdir(joinpath(path, "QXContexts_io_" * string(global_logger().session_id)))
log = readlines(joinpath(path, "QXContexts_io_" * string(global_logger().session_id), "rank_0.log"))
for l in log
log_elem = split(l, " ")
em = collect(eachmatch(r"\[(.*?)\]", log_elem[1])) # capture values in []
@test DateTime(em[1].match, df) !== nothing
@test match(r"[rank=/\d+/]", em[2].match) !== nothing
@test em[3].match == "[host=$(gethostname())]"
@test log_elem[2] == "WARN"
@test log_elem[3] == "warn_test"
@test log_elem[4] == "-@->"
file_line = splitext(log_elem[5])
@test file_line[1] == splitext(@__FILE__)[1]
@test split(file_line[2], ":")[2] == string(line_num)
end
end
end
end
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 803 | # using Random
# @testset "Test MPI specific portions" begin
# @testset "Test get_rank_size and get_rank_start" begin
# rng = MersenneTwister(42)
# ns = rand(1:2000000, 5)
# ms = rand(1:10, 5)
# # test that the sum of sizes on each rank sum to total
# for n in ns
# for m in ms
# @test sum(map(x -> get_rank_size(n, m, x), 0:m-1)) == n
# end
# end
# # test that the start of the next rank is at the start of previous rank plus the size
# for n in ns
# for m in ms
# test_rank = x -> (get_rank_start(n, m, x-1) + get_rank_size(n, m, x-1) == get_rank_start(n, m, x))
# @test all(map(test_rank, 1:m-1))
# end
# end
# end
# end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1785 | module SamplingTests
using FileIO
using Test
using QXContexts
include("utils.jl")
@testset "Sampling tests" begin
test_path = dirname(@__DIR__)
dsl_file = joinpath(test_path, "examples/ghz/ghz_5.qx")
input_file = joinpath(test_path, "examples/ghz/ghz_5.jld2")
param_file = joinpath(test_path, "examples/ghz/ghz_5.yml")
mktempdir() do path
output_file = joinpath(path, "out.jld2")
execute(dsl_file, input_file, param_file, output_file)
# ensure all dictionary entries match
output = load(output_file, "results")
@test output[1] == collect(keys(ghz_results))
@test output[2] ≈ collect(values(ghz_results))
end
# Test rejection sampling
param_file = joinpath(test_path, "examples/ghz/ghz_5_rejection.yml")
mktempdir() do path
output_file = joinpath(path, "out.jld2")
execute(dsl_file, input_file, param_file, output_file)
# ensure all dictionary entries match
output = FileIO.load(output_file, "results")
@test length(output) == 10 # Should only have 10 samples
@test length(unique(output)) == 2 # output should only contain strings "11111" and "00000"
end
# Test uniform sampling
param_file = joinpath(test_path, "examples/ghz/ghz_5_uniform.yml")
mktempdir() do path
output_file = joinpath(path, "out.jld2")
execute(dsl_file, input_file, param_file, output_file)
# ensure all dictionary entries match
output = FileIO.load(output_file, "results")
@test length(output[1]) == 10 # Should only have 10 samples
@test typeof(output[1][1]) == String
@test length(output[2]) == 10 # should only have 10 amplitudes
@test typeof(output[2][1]) <: Complex
end
end
end | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | code | 1093 | # Some useful data structures that are used in multiple tests
using DataStructures
using QXContexts
# expected results for ghz exmaple files included in examples/ghz folder
ghz_results = OrderedDict{String, ComplexF32}(
"11001" => 0 + 0im,
"10000" => 0 + 0im,
"00011" => 0 + 0im,
"00000" => 1/sqrt(2) + 0im,
"11000" => 0 + 0im,
"10010" => 0 + 0im,
"10111" => 0 + 0im,
"01010" => 0 + 0im,
"01101" => 0 + 0im,
"11111" => 1/sqrt(2) + 0im,
)
# sample set of commands used for testing compute graph
sample_cmds = AbstractCommand[
LoadCommand(:t1, :data_1, [2,2,2]),
ReshapeCommand(:t1_r, :t1, [[1],[2,3]]),
LoadCommand(:t2, :data_2, [4,2]),
ViewCommand(:t1_s, :t1_r, :v1, 2, 4),
ViewCommand(:t2_s, :t2, :v1, 1, 4),
OutputCommand(:t3, 1, 2),
ContractCommand(:t4, [1,3], :t1_s, [1,2], :t2_s, [2,3]),
ContractCommand(:t5, [2], :t3, [1], :t4, [1,2]),
SaveCommand(:result, :t5)
]
# matching tensors for sample commands
sample_tensors = Dict{Symbol, AbstractArray}(
:data_1 => rand(2,2,2),
:data_2 => rand(4,2)
)
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | docs | 2128 | # Contributing to QuantEx
We welcome contributions and here we lay out some guidelines which should be followed to make the process more streamlined for all involved.
## Contribution process
Commits should not be pushed directly to the master branch but should instead be merged from feature branches via pull/merge requests. To track them,
tasks, features, bugs and enhancements should each have a corresponding issue which explains the motivation and logic used in the committed code/documentation.
The steps in the full process from creating an issue to merging are:
1. Create issue with quick description of feature, bug, enhancement etc.. This will be assigned an issue number
2. Create a branch with a name that starts with the issue number and gives a concise description of the issue
3. Make the necessary changes to the branch. Commit messages should follow imperative style ("Fix bug" vs "Fixed bug"). Further guidelines for commit messages [here](https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53)
4. Create a merge/pull request requesting to merge the changes into the master branch and select the appropriate merge/pull request template, prefixing the request title with `WIP:` to indicate that it is a work in progress
5. The merge/pull request template has a number of check list items which should be satisfied before the request can be merged. These include:
- All discussions are resolved
- New code is covered by appropriate tests
- Tests are passing locally and on CI
- The documentation is consistent with changes
- Any code that was copied from other sources has the paper/url in a comment and is compatible with the MIT licence
- Notebooks/examples not covered by unittests have been tested and updated as required
- The feature branch is up-to-date with the master branch (rebase if behind)
- Incremented the version string in Project.toml file
6. Once all the items on the checklist have been addressed the `WIP:` prefix should be removed and the merge/pull request should be reviewed by one of the developement team and merged if accepted or comments added explaining rationale if not | QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | docs | 7455 | # QXContexts
[](https://JuliaQX.github.io/QXContexts.jl/stable)
[](https://JuliaQX.github.io/QXContexts.jl/dev)
[](https://github.com/JuliaQX/QXContexts.jl/actions)
[](https://codecov.io/gh/JuliaQX/QXContexts.jl)
QXContexts is a Julia package for simulating quantum circuits using tensor network approaches and targeting large distributed memory clusters with hardware accelerators. It was developed as part of the QuantEx project, one of the individual software projects of WP8 of PRACE 6IP.
QXContexts is one of a family of packages each with a different aim. QXContexts is the package that is designed to the do the bulk of the computations and makes use of distributed compute resources via [MPI.jl](https://github.com/JuliaParallel/MPI.jl) as well as hardware accelerators. [OMEinsum.jl](https://github.com/under-Peter/OMEinsum.jl) and [TensorOperations.jl](https://github.com/Jutho/TensorOperations.jl) are currently used to carry out the tensor contraction operations.
# Installation
QXContexts is a Julia package and can be installed using Julia's inbuilt package manager from the Julia REPL using.
```
import Pkg
Pkg.add("QXContexts")
```
or directly from the github repository with
```
import Pkg
Pkg.add(url="https://github.com/JuliaQX/QXContexts.jl")
```
## Custom system image
Using a custom system image can greatly reduce the latency when starting computations.
To build a custom system image one can run the following commands from the Julia REPL
```
import QXContexts
QXContexts.compile()
```
This can take up to a half hour to compile and will produce a shared object system image file in the root folder of the project.
This will have a different suffix depending on the platform (`.so` for Linux systems, `.dylib` for macOS and `.dll` for windows systems).
To use the system image the `--sysimage` (or equivalently `-J`) can be used providing the path to the system image. For example
```
julia --project --sysimage=JuliaSysimage.so
```
For development it is useful to use a system image without any of the functions from QXContexts itself begin compiled.
To do this one can call the compile function with `dev` set to true as
```
QXContexts.compile(dev=true)
```
# Example usage
QXContexts uses input files generated by QXTools which describe the computation to be performed.
An example of the input files for a five qubit GHZ circuit are provided in the `examples/ghz` folder.
This example can be run directly using the `examples/ghz_example.jl` script or this can be run using the CLI `bin/qxrun.jl` script with the following command
```
julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx\
-i examples/ghz/ghz_5.jld2\
-p examples/ghz/ghz_5.yml\
-o examples/ghz/out.jld2
```
where the `-d`, `-i` and `-p` switches describe the DSL file, input data file and parameter file to use respectively.
The `-o` switch refers to the output file.
If all three files have the same prefix, then it is only necessary to provide the name of the dsl file so the example could also be run with the command
```
julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2
```
The output is written to a [JLD2](https://github.com/JuliaIO/JLD2.jl) file.
A small utility script called `examine_output.jl` is provided that allows examination of this output which
can be used as
```
julia --project bin/examine_output.jl examples/ghz/out.jld2
```
## Enable timing
To get timing information on the different sections of the code, the code has been instrumented with [TimerOutputs.jl](https://github.com/KristofferC/TimerOutputs.jl). To enable this, one can add the `--timings` (or `-t`) switch to the CLI command.
```
julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2 -t
```
## Enable debugging
To get detailed debugging information one can include the package name in the `JULIA_DEBUG` environment variable. For example
```
JULIA_DEBUG=QXContexts julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2
```
This generates very verbose output so care should be taken when using this for large runs.
## Enable logging
To log debug and performance information to files QXContexts has 3 logger-models:
- QXLogger: the default stdout logger: useful for single node, single process logging (interactive)
- QXLoggerMPIShared: an MPI-IO shared-file logger: all MPI ranks share a single file for writing their respective logs; blocking.
- QXLoggerMPIPerRank: MPI-enabled file per rank logger: non-blocking debug files created per MPI rank.
The loggers can be (individually) instantiated by selecting the global logger to use with one of the following:
```
global_logger(QXContexts.Logger.QXLogger())
global_logger(QXContexts.Logger.QXLoggerMPIShared())
global_logger(QXContexts.Logger.QXLoggerMPIPerRank())
```
# Running with MPI
MPI is used to run the computation across multiple processes. The `mpiexecjl` script can be used to launch Julia on multiple processes. See [MPI.jl documentation](https://juliaparallel.github.io/MPI.jl/latest/configuration/#Julia-wrapper-for-mpiexec) for details on how to set this up. For example, to run the above example with 4 processes one would use the following:
```
mpiexecjl --project -n 4 julia bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2
```
In this case the amplitudes that are to be calculated are split between the processes. For
larger cases where many partitions are used for each amplitude it can be useful to split
this calculation over many processes also. The `--sub-communicator-size` (or `-m`) option
can be used to specify the size of sub-communicators to use for each amplitude. For example
```
mpiexecjl --project -n 4 julia bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2 -m 2
```
Here the four processes are split between two communicators, each with two processes.
# Using GPUs
On systems with NVIDIA GPUs, these can be used by passing a `--gpu` (or `-g`) flag to `qxrun.jl` on the command line.
# Contributing
Contributions from users are welcome and we encourage users to open issues and submit merge/pull requests for any problems or feature requests they have. The
[CONTRIBUTING.md](CONTRIBUTION.md) has further details of the contribution guidelines.
# Building documentation
QXContexts.jl uses [Documenter.jl](https://juliadocs.github.io/Documenter.jl/stable/) to generate documentation. To build
the documentation locally run the following from the top-level folder.
The first time it will be necessary to instantiate the environment to install dependencies
```
julia --project=docs -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
```
and then to build the documentation
```
julia --project=docs docs/make.jl
```
To serve the generated documentation locally use
```
julia --project=docs -e 'using Pkg; Pkg.add("LiveServer"); using LiveServer; serve(dir="docs/build")'
```
Or with python3 from the `docs/build` folder using
```
python3 -m http.server
```
The generated documentation should now be viewable locally in a browser at `http://localhost:8000`.
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | docs | 740 | ### Summary
Short summary of the changes.
### Rationale
Explain the rationale for the changes (links to relevant issue(s)).
#### Implementation Details
#### Additional notes
<hr>
***Check before merging:***
- [ ] All discussions are resolved
- [ ] New code is covered by appropriate tests
- [ ] Tests are passing locally and on CI
- [ ] The documentation is consistent with changes
- [ ] Any code that was copied from other sources has the paper/url in a comment and is compatible with the MIT licence
- [ ] Notebooks/examples not covered by unittests have been tested and updated as required
- [ ] The feature branch is up-to-date with the master branch (rebase if behind)
- [ ] Incremented the version string in Project.toml file
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | docs | 740 | ### Summary
Short summary of the changes.
### Rationale
Explain the rationale for the changes (links to relevant issue(s)).
#### Implementation Details
#### Additional notes
<hr>
***Check before merging:***
- [ ] All discussions are resolved
- [ ] New code is covered by appropriate tests
- [ ] Tests are passing locally and on CI
- [ ] The documentation is consistent with changes
- [ ] Any code that was copied from other sources has the paper/url in a comment and is compatible with the MIT licence
- [ ] Notebooks/examples not covered by unittests have been tested and updated as required
- [ ] The feature branch is up-to-date with the master branch (rebase if behind)
- [ ] Incremented the version string in Project.toml file
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 1.0.0 | ca72251e220e0a100c56c549ddf56a7ec9cf49a0 | docs | 6258 | ```@meta
CurrentModule = QXContexts
```
# QXContexts
QXContexts is a Julia package for simulating quantum circuits using tensor network approaches targeting large distributed memory clusters with hardware accelerators. It was developed as part of the [QuantEx](https://juliaqx.github.io/QXTools.jl/stable/) project, one of the individual software projects of WP8 of PRACE 6IP.
QXContexts is one of a family of packages under the [JuliaQX](https://github.com/JuliaQX) organization and aims to execute tensor network based simulations of quantum circuits generated by [QXTools](https://github.com/JuliaQX/QXTools.jl). It is designed to make use of distributed compute resources via [MPI.jl](https://github.com/JuliaParallel/MPI.jl) as well as hardware accelerators. [OMEinsum.jl](https://github.com/under-Peter/OMEinsum.jl) is currently used to carry out the tensor contraction operations.
# Installation
QXContexts is a Julia package and can be installed using Julia's inbuilt package manager from the Julia REPL using.
```
import Pkg
Pkg.add("QXContexts")
```
or directly from the github repository with
```
import Pkg
Pkg.add(url="https://github.com/JuliaQX/QXContexts.jl")
```
## Custom system image
Using a custom system image can greatly reduce the latency when starting computations.
To build a custom system image one can run the following commands from the Julia REPL
```
import QXContexts
QXContexts.compile()
```
This can take up to a half hour to compile and will produce a shared object system image file in the root folder of the project.
This will have a different suffix depending on the platform (`.so` for Linux systems, `.dylib` for macOS and `.dll` for windows systems).
To use the system image the `--sysimage` (`-J` for short) can be used providing the path to the system image. For example
```
julia --project --sysimage=JuliaSysimage.so
```
For development it is useful to use a system image without any of the functions from QXContexts itself begin compiled.
To do this one can call the compile function with `dev` set to true as
```
QXContexts.compile(dev=true)
```
# Example usage
QXContexts uses input files generated by QXTools which describe the computation to be performed. An example of the input files for a five qubit GHZ circuit is provided
in the `examples/ghz` folder.
This example can be run directly using the `examples/ghz_example.jl` script or this can be run using the CLI `bin/qxrun.jl` script with the following command
```
julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx\
-i examples/ghz/ghz_5.jld2\
-p examples/ghz/ghz_5.yml\
-o examples/ghz/out.jld2
```
where the `-d`, `-i` and `-p` switches describe the DSL file, input data file and parameter file to use respectively. The `-o` switch refers to the output file. If all three files have the same prefix, then it is only necessary to provide the name of the DSL file, so the example could also be run with the command
```
julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2
```
The output is written to a [JLD2](https://github.com/JuliaIO/JLD2.jl) file. A small utility
script called `examine_output.jl` is provided for examining this output; it
can be used as
```
julia --project bin/examine_output.jl examples/ghz/out.jld2
```
## Enable timing
To get timing information on the different sections of the code, it has been instrumented with [TimerOutputs.jl](https://github.com/KristofferC/TimerOutputs.jl). To enable this, one can add the `--timings` (or `-t`) switch to the CLI command.
```
julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2 -t
```
## Enable debugging
To get detailed debugging information one can include the package name in the `JULIA_DEBUG` environment variable. For example
```
JULIA_DEBUG=QXContexts julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2
```
This generates very verbose output, so care should be taken when using it for large runs.
## Enable logging
To log debug and performance information to files, QXContexts provides three loggers:
- `QXLogger`: the default stdout logger, useful for single-node, single-process (interactive) logging
- `QXLoggerMPIShared`: an MPI-IO shared-file logger where all MPI ranks write their logs to a single shared file; blocking.
- `QXLoggerMPIPerRank`: an MPI-enabled file-per-rank logger; non-blocking debug files are created per MPI rank.
A logger can be selected as the global logger with one of the following:
```
global_logger(QXContexts.Logger.QXLogger())
global_logger(QXContexts.Logger.QXLoggerMPIShared())
global_logger(QXContexts.Logger.QXLoggerMPIPerRank())
```
# Running with MPI
MPI is used to distribute the computation over multiple processes. The `mpiexecjl` script can be used to launch Julia on multiple processes. See [MPI.jl documentation](https://juliaparallel.github.io/MPI.jl/latest/configuration/#Julia-wrapper-for-mpiexec) for details on how to set this up. For example, to run the above example with 4 processes one would use the following:
```
mpiexecjl --project -n 4 julia bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2
```
In this case the amplitudes that are to be calculated are split between the processes. For
larger cases where many partitions are used for each amplitude it can be useful to split
this calculation over many processes also. The `--sub-communicator-size` (or `-m`) option
can be used to specify the size of sub-communicators to use for each amplitude. For example
```
mpiexecjl --project -n 4 julia bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2 -m 2
```
Here the four processes are split between two communicators, each with two processes.
# Using GPUs
On systems with NVIDIA GPUs, these can be used by passing a `--gpu` (or `-g`) flag to `qxrun.jl` on the command line.
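For example, the GHZ example above could be run on a GPU with a command along the following lines (the switches are the same as before, with `-g` added):
```
julia --project bin/qxrun.jl -d examples/ghz/ghz_5.qx -o examples/ghz/out.jld2 -g
```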
# Contributing
Contributions from users are welcome and we encourage users to open issues and submit merge/pull requests for any problems or feature requests they have. The
CONTRIBUTING.md file at the top level of the source folder has further details of the contribution guidelines.
| QXContexts | https://github.com/JuliaQX/QXContexts.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | code | 690 | using WeightedPCA
using Documenter
DocMeta.setdocmeta!(WeightedPCA, :DocTestSetup, :(using WeightedPCA); recursive=true)
makedocs(;
modules=[WeightedPCA],
authors="David Hong <[email protected]> and contributors",
repo="https://github.com/dahong67/WeightedPCA.jl/blob/{commit}{path}#{line}",
sitename="WeightedPCA.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://dahong67.github.io/WeightedPCA.jl",
edit_link="master",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/dahong67/WeightedPCA.jl",
devbranch="master",
)
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | code | 1483 | "Weighted PCA module. Provides weighted principal component analysis (PCA) for data with samples of heterogeneous quality (heteroscedastic noise)."
module WeightedPCA
using LinearAlgebra: norm, svd, svdvals
export wpca
export UniformWeights, InverseVarianceWeights, OptimalWeights
# Main function
"""
wpca(Y, i, weights=UniformWeights())
Compute `i`th principal component of data `Y` via weighted PCA using `weights`,
i.e., output is the `i`th eigenvector of the weighted sample covariance
`Σ_l w[l] Y[l]*Y[l]'`.
Data `Y` is a list of matrices (each column is a sample).
# Choices for `weights`
+ `UniformWeights()` : uniform weights, i.e., `w[l] = 1` [default]
+ `InverseVarianceWeights([v])` : inverse noise variance weights, i.e., `w[l] = 1/v[l]`
+ `OptimalWeights([v,λ])` : optimal weights for signal with variance `λ`, i.e., `w[l] = 1/v[l] * 1/(1+v[l]/λ)`
The `weights` can also be manually set by passing in an `AbstractVector{<:Real}`.
See also: [`UniformWeights`](@ref), [`InverseVarianceWeights`](@ref), [`OptimalWeights`](@ref).
"""
wpca(Y, i::Integer, weights=UniformWeights()) = _wpca(Y, i, weights)
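# Illustrative usage sketch (hypothetical data, not part of the package):
#   Y = [randn(10, 50), randn(10, 200)]                        # two groups of 10-dimensional samples
#   u1  = wpca(Y, 1)                                           # first PC with uniform weights
#   u1w = wpca(Y, 1, InverseVarianceWeights(v = [1.0, 4.0]))   # known noise variances per group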
function _wpca(Y, i, w::AbstractVector{<:Real})
axes(Y) == axes(w) || throw(DimensionMismatch("`axes(Y)` must match `axes(w)`"))
Yw = reduce(hcat, sqrt(wl) * Yl for (wl, Yl) in zip(w, Y))
return svd(Yw).U[:, i]
end
# Variance estimators and weighted PCA with computed weights
include("variance-estimators.jl")
include("computed-weights.jl")
end
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | code | 2747 | ## Weighted PCA with computed weights
# Abstract type for computed weights
"""
ComputedWeights
Abstract supertype for weights that are computed from properties of the data.
"""
abstract type ComputedWeights end
# Uniform weights
"""
UniformWeights <: ComputedWeights
Uniform weighting, i.e., `w[l] = 1`.
Corresponds to conventional (unweighted) PCA.
"""
struct UniformWeights <: ComputedWeights end
_wpca(Y, i, ::UniformWeights) =
_wpca(Y, i, [one(eltype(eltype(Y))) for _ in 1:length(Y)])
# Inverse noise variance weights
"""
InverseVarianceWeights <: ComputedWeights
Inverse noise variance weighting, i.e., `w[l] = 1/v[l]`.
# Constructors
+ `InverseVarianceWeights(v=noisevar)` for known noise variances `noisevar`
+ `InverseVarianceWeights()` for unknown noise variances; noise variances will be estimated from data
"""
struct InverseVarianceWeights{Tv} <: ComputedWeights
v::Tv # vector of noise variances
end
InverseVarianceWeights(; v=NoiseNormEstimator()) =
InverseVarianceWeights(v)
_wpca(Y, i, method::InverseVarianceWeights{<:AbstractVector{<:Real}}) =
_wpca(Y, i, inv.(method.v))
_wpca(Y, i, method::InverseVarianceWeights{<:AbstractNoiseVarEstimator}) =
_wpca(Y, i, InverseVarianceWeights(estimatev(Y, method.v)))
# Optimal weights
"""
OptimalWeights <: ComputedWeights
Optimal weighting, i.e., `w[l] = 1/v[l] * 1/(1+v[l]/λ)`.
# Constructors
+ `OptimalWeights(v=noisevar, λ=signalvar)` for known noise variances `noisevar` and signal variance `signalvar`
+ `OptimalWeights(λ=signalvar)` for known signal variance `signalvar`; noise variances will be estimated from data
+ `OptimalWeights(v=noisevar)` for known noise variances `noisevar`; signal variance will be estimated from data
+ `OptimalWeights()` for unknown noise and signal variances; noise and signal variances will be estimated from data
"""
struct OptimalWeights{Tv,Tλ} <: ComputedWeights
v::Tv # vector of noise variances
λ::Tλ # signal variance
end
OptimalWeights(; v=NoiseNormEstimator(), λ=InvNoiseWeightedShrinkageEstimator()) =
OptimalWeights(v, λ)
_wpca(Y, i::Number, method::OptimalWeights{<:AbstractVector{<:Real},<:Real}) =
_wpca(Y, i, inv.(method.v) .* inv.(one(method.λ) .+ method.v ./ method.λ))
_wpca(Y, i::Number, method::OptimalWeights{<:AbstractNoiseVarEstimator,<:Real}) =
_wpca(Y, i, OptimalWeights(estimatev(Y, method.v), method.λ))
_wpca(Y, i::Number, method::OptimalWeights{<:AbstractVector{<:Real},<:AbstractSignalVarEstimator}) =
_wpca(Y, i, OptimalWeights(method.v, estimateλ(Y, i, method.v, method.λ)))
_wpca(Y, i, method::OptimalWeights{<:AbstractNoiseVarEstimator,<:AbstractSignalVarEstimator}) =
_wpca(Y, i, OptimalWeights(estimatev(Y, method.v), method.λ))
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | code | 2672 | ## Variance estimators
# Noise variance estimators
"""
AbstractNoiseVarEstimator
Abstract supertype for noise variance estimators.
Should implement `estimatev`.
"""
abstract type AbstractNoiseVarEstimator end
"""
estimatev(Y, method::AbstractNoiseVarEstimator) -> Vector{<:Real}
Estimate the vector of noise variances from data `Y`
using the estimator `method`.
"""
function estimatev end
"""
NoiseNormEstimator <: AbstractNoiseVarEstimator
Norm-based estimate of noise variance: `vh[l] = norm(Y[l])^2/length(Y[l])`
"""
struct NoiseNormEstimator <: AbstractNoiseVarEstimator end
estimatev(Y, ::NoiseNormEstimator) = [norm(Yl)^2 / length(Yl) for Yl in Y]
# Signal variance estimators
"""
AbstractSignalVarEstimator
Abstract supertype for signal variance estimators.
Should implement `estimateλ`.
"""
abstract type AbstractSignalVarEstimator end
"""
estimateλ(Y, v, method::AbstractSignalVarEstimator) -> Vector{<:Real}
Estimate the vector of signal variances from data `Y`
and noise variances `v` using the estimator `method`.
"""
function estimateλ end
"""
estimateλ(Y, i, v, method::AbstractSignalVarEstimator) -> Real
Estimate the `i`th signal variance from data `Y`
and noise variances `v` using the estimator `method`.
"""
function estimateλ(Y, i::Integer, v, method::AbstractSignalVarEstimator)
λ = estimateλ(Y, v, method)
i <= length(λ) || error("could not estimate λi, component may be too weak")
return λ[i]
end
"""
InvNoiseWeightedShrinkageEstimator <: AbstractSignalVarEstimator
Inverse noise variance weighted shrinkage-based estimate of signal variances:
`λh[i] = Ξ(λhinv[i])`
+ `Ξ(λ) = -(vb+vb/c-λ)/2 + sqrt((vb+vb/c-λ)^2-4*vb^2/c)/2` is the shrinkage
+ `vb = ( Σ_l p[l]/v[l] )^(-1)` where `p[l] = n[l]/n`
+ `c = n/d` is the data aspect ratio
+ `λhinv = eigvals( Σ_l (1/v[l])/(n[1]/v[1]+⋯+n[L]/v[L]) Y[l]*Y[l]' )`
are inverse noise variance weighted eigenvalues
"""
struct InvNoiseWeightedShrinkageEstimator <: AbstractSignalVarEstimator end
function estimateλ(Y, v, ::InvNoiseWeightedShrinkageEstimator)
# Shrinkage function
Ξ = (λ, v, c) -> -(v + v / c - λ) / 2 + sqrt((v + v / c - λ)^2 - 4 * v^2 / c) / 2
# Compute inverse noise variance weighted eigenvalues
n, L = size.(Y, 2), length(Y)
w = [(1 / v[l]) / sum(n[lp] / v[lp] for lp in 1:L) for l in 1:L]
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
λhinv = svdvals(Yw) .^ 2
# Compute number of components above (inv weighted) phase transition
d = (only ∘ unique)(size.(Y, 1))
p, c = n ./ sum(n), sum(n) / d
vb = inv(sum(p[l] / v[l] for l in 1:L))
k = count(>(vb * (1 + 1 / sqrt(c))^2), λhinv)
return [Ξ(λhinv[i], vb, c) for i in 1:k]
end
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | code | 784 | ## Reference implementations
module RefImp
using LinearAlgebra
function vest(Y)
return map(Y) do Yl
d, nl = size(Yl)
return norm(Yl)^2 / (d * nl)
end
end
function λest_inv(Y, v)
n, L = size.(Y, 2), length(Y)
w = [(1 / v[l]) / sum(n[lp] / v[lp] for lp in 1:L) for l in 1:L]
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
return svdvals(Yw) .^ 2
end
Ξ(λ, v; c) = -(v + v / c - λ) / 2 + sqrt((v + v / c - λ)^2 - 4 * v^2 / c) / 2
function λest(Y; v=vest(Y))
d, n, L = (only ∘ unique)(size.(Y, 1)), size.(Y, 2), length(Y)
p = n ./ sum(n)
λinv = λest_inv(Y, v)
c = sum(n) / d
vb = inv(sum(p[l] / v[l] for l in 1:L))
k = count(>(vb * (1 + 1 / sqrt(c))^2), λinv)
return [Ξ(λinv[i], vb; c) for i in 1:k]
end
end
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | code | 5241 | using WeightedPCA
using Test
using WeightedPCA: estimatev, NoiseNormEstimator
using WeightedPCA: estimateλ, InvNoiseWeightedShrinkageEstimator
using LinearAlgebra
using StableRNGs
include("ref-imp.jl") # reference implementations
@testset "Noise variance estimators" begin
rng = StableRNG(0)
# Data setup
c, v = [4, 8], [1, 3]
λ1 = 1
d = 20
n = c .* d
k = 1
L = length.((c, v)) |> only ∘ unique
# Generate data
u1 = normalize(randn(rng, d))
F = reshape(sqrt(λ1) * u1, :, 1)
Y = [F * randn(rng, k, n[l]) + sqrt(v[l]) * randn(rng, d, n[l]) for l in 1:L]
@testset "NoiseNormEstimator" begin
@test estimatev(Y, NoiseNormEstimator()) == RefImp.vest(Y)
end
end
@testset "Signal variance estimators" begin
rng = StableRNG(0)
# Data setup
c, v = [4, 8], [1, 3]
λ1 = 1
d = 20
n = c .* d
k = 1
L = length.((c, v)) |> only ∘ unique
# Generate data
u1 = normalize(randn(rng, d))
F = reshape(sqrt(λ1) * u1, :, 1)
Y = [F * randn(rng, k, n[l]) + sqrt(v[l]) * randn(rng, d, n[l]) for l in 1:L]
@testset "InvNoiseWeightedShrinkageEstimator" begin
# true v
λr = RefImp.λest(Y; v=v)
@test estimateλ(Y, v, InvNoiseWeightedShrinkageEstimator()) == λr
for i in axes(λr, 1)
@test estimateλ(Y, i, v, InvNoiseWeightedShrinkageEstimator()) == λr[i]
end
@test_throws ErrorException estimateλ(Y, k + 1, v, InvNoiseWeightedShrinkageEstimator())
# estimated v
λr = RefImp.λest(Y)
@test estimateλ(Y, estimatev(Y, NoiseNormEstimator()), InvNoiseWeightedShrinkageEstimator()) == λr
for i in axes(λr, 1)
@test estimateλ(Y, i, estimatev(Y, NoiseNormEstimator()), InvNoiseWeightedShrinkageEstimator()) == λr[i]
end
@test_throws ErrorException estimateλ(Y, k + 1, estimatev(Y, NoiseNormEstimator()), InvNoiseWeightedShrinkageEstimator())
end
end
@testset "Weighted PCA: manually set weights" begin
rng = StableRNG(0)
# Data setup
c, v = [4, 8], [1, 3]
λ1 = 1
d = 20
n = c .* d
k = 1
L = length.((c, v)) |> only ∘ unique
# Generate data
u1 = normalize(randn(rng, d))
F = reshape(sqrt(λ1) * u1, :, 1)
Y = [F * randn(rng, k, n[l]) + sqrt(v[l]) * randn(rng, d, n[l]) for l in 1:L]
@testset "i=$i, w=$w" for i in 1:3, w in [[1, 2], [2, 3]]
@test wpca(Y, i, w) == let w = w
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
svd(Yw).U[:, i]
end
end
end
@testset "Weighted PCA: uniform weights" begin
rng = StableRNG(0)
# Data setup
c, v = [4, 8], [1, 3]
λ1 = 1
d = 20
n = c .* d
k = 1
L = length.((c, v)) |> only ∘ unique
# Generate data
u1 = normalize(randn(rng, d))
F = reshape(sqrt(λ1) * u1, :, 1)
Y = [F * randn(rng, k, n[l]) + sqrt(v[l]) * randn(rng, d, n[l]) for l in 1:L]
@testset "i=$i" for i in 1:3
@test wpca(Y, i, UniformWeights()) == svd(reduce(hcat, Y)).U[:, i]
end
end
@testset "Weighted PCA: inverse noise variance weights" begin
rng = StableRNG(0)
# Data setup
c, v = [4, 8], [1, 3]
λ1 = 1
d = 20
n = c .* d
k = 1
L = length.((c, v)) |> only ∘ unique
# Generate data
u1 = normalize(randn(rng, d))
F = reshape(sqrt(λ1) * u1, :, 1)
Y = [F * randn(rng, k, n[l]) + sqrt(v[l]) * randn(rng, d, n[l]) for l in 1:L]
@testset "i=$i" for i in 1:3
# true v
@test wpca(Y, i, InverseVarianceWeights(v=v)) == let v = v
w = inv.(v)
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
svd(Yw).U[:, i]
end
# estimated v
@test wpca(Y, i, InverseVarianceWeights()) == let v = RefImp.vest(Y)
w = inv.(v)
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
svd(Yw).U[:, i]
end
end
end
@testset "Weighted PCA: optimal weights" begin
rng = StableRNG(0)
# Data setup
c, v = [4, 8], [1, 3]
λ1 = 1.5
d = 20
n = c .* d
k = 1
L = length.((c, v)) |> only ∘ unique
# Generate data
u1 = normalize(randn(rng, d))
F = reshape(sqrt(λ1) * u1, :, 1)
Y = [F * randn(rng, k, n[l]) + sqrt(v[l]) * randn(rng, d, n[l]) for l in 1:L]
# true v, true λi
@test wpca(Y, 1, OptimalWeights(v=v, λ=λ1)) ≈ let v = v, λi = λ1
w = inv.(v .* (λi .+ v))
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
svd(Yw).U[:, 1]
end
# true v, estimated λi
@test wpca(Y, 1, OptimalWeights(v=v)) ≈ let v = v, λi = RefImp.λest(Y; v=v)[1]
w = inv.(v .* (λi .+ v))
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
svd(Yw).U[:, 1]
end
# estimated v, true λi
@test wpca(Y, 1, OptimalWeights(λ=λ1)) ≈ let v = RefImp.vest(Y), λi = λ1
w = inv.(v .* (λi .+ v))
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
svd(Yw).U[:, 1]
end
# estimated v, estimated λi
@test wpca(Y, 1, OptimalWeights()) ≈ let v = RefImp.vest(Y), λi = RefImp.λest(Y)[1]
w = inv.(v .* (λi .+ v))
Yw = reduce(hcat, sqrt(w[l]) * Y[l] for l in 1:L)
svd(Yw).U[:, 1]
end
end
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | docs | 1724 | # WeightedPCA: PCA for heterogeneous quality samples
[](https://dahong67.github.io/WeightedPCA.jl/stable/)
[](https://dahong67.github.io/WeightedPCA.jl/dev/)
[](https://www.repostatus.org/#wip)
[](https://JuliaCI.github.io/NanosoldierReports/pkgeval_badges/report.html)
[](https://github.com/dahong67/WeightedPCA.jl/actions/workflows/CI.yml?query=branch%3Amaster)
[](https://codecov.io/gh/dahong67/WeightedPCA.jl)
> 👋 *This package provides research code and work is ongoing.
> If you are interested in using it in your own research,
> **I'd love to hear from you and collaborate!**
> Feel free to write: [email protected]*
Please cite the following paper for this technique:
> David Hong, Fan Yang, Jeffrey A. Fessler, Laura Balzano.
> "Optimally Weighted PCA for High-Dimensional Heteroscedastic Data", 2022.
> https://arxiv.org/abs/1810.12862
In BibTeX form:
```bibtex
@Misc{hyfb2022owp,
title = "Optimally Weighted PCA for High-Dimensional Heteroscedastic Data",
author = "David Hong and Fan Yang and Jeffrey A. Fessler and Laura Balzano",
year = 2022,
url = "https://arxiv.org/abs/1810.12862",
}
```
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.0 | d00d2939bb39c5cbc6ef71bc2c594ead1a10a039 | docs | 1099 | ```@meta
CurrentModule = WeightedPCA
```
# WeightedPCA: PCA for heterogeneous quality samples
Documentation for [WeightedPCA](https://github.com/dahong67/WeightedPCA.jl).
> 👋 *This package provides research code and work is ongoing.
> If you are interested in using it in your own research,
> **I'd love to hear from you and collaborate!**
> Feel free to write: [[email protected]](mailto:[email protected])*
Please cite the following paper for this technique:
> David Hong, Fan Yang, Jeffrey A. Fessler, Laura Balzano.
> "Optimally Weighted PCA for High-Dimensional Heteroscedastic Data", 2022.
> [https://arxiv.org/abs/1810.12862](https://arxiv.org/abs/1810.12862)
In BibTeX form:
```bibtex
@Misc{hyfb2022owp,
title = "Optimally Weighted PCA for High-Dimensional Heteroscedastic Data",
author = "David Hong and Fan Yang and Jeffrey A. Fessler and Laura Balzano",
year = 2022,
url = "https://arxiv.org/abs/1810.12862",
}
```
## Docstrings
```@index
```
```@docs
WeightedPCA
wpca
ComputedWeights
UniformWeights
InverseVarianceWeights
OptimalWeights
```
| WeightedPCA | https://github.com/dahong67/WeightedPCA.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 444 | # using ObjectivePaths
# using Term: install_term_stacktrace
# install_term_stacktrace()
# f = path("/Users/federicoclaudi/Documents/Github/ObjectivePaths/src")
# f2 = path("/Users/federicoclaudi/Documents/Github/ObjectivePaths/nonexist")
# p = path("src/types.jl")
# # TODO readme
# # TODO release
# # f / p
# # f / "test.md"
# # f
# println("\n"^10)
# println(f)
# println(f2)
# println(p)
# info(f)
# using Pkg
# Pkg.add("Term")
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 765 | using ObjectivePaths
using Documenter
DocMeta.setdocmeta!(
ObjectivePaths,
:DocTestSetup,
:(using ObjectivePaths);
recursive = true,
)
makedocs(;
modules = [ObjectivePaths],
authors = "FedeClaudi <[email protected]> and contributors",
repo = "https://github.com/FedeClaudi/ObjectivePaths.jl/blob/{commit}{path}#{line}",
sitename = "ObjectivePaths.jl",
format = Documenter.HTML(;
prettyurls = get(ENV, "CI", "false") == "true",
canonical = "https://FedeClaudi.github.io/ObjectivePaths.jl",
assets = String[],
collapselevel = 1,
),
strict = false,
pages = ["Home" => "index.md", "library" => "library.md"],
)
deploydocs(; repo = "github.com/FedeClaudi/ObjectivePaths.jl")
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 381 | module ObjectivePaths
using Term
using Term.Layout
using Term.Tables
import Term: Theme, set_theme
import Term.Trees: Tree
import OrderedCollections: OrderedDict
include("types.jl")
include("utils.jl")
include("operations.jl")
include("parts.jl")
const is_win = Sys.iswindows()
export Folder, File, path, info, tree
export name, exists, nfiles, base
export files, subdirs
end
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 740 |
# ---------------------------- paths manipulation ---------------------------- #
"""
-(path::AbstractPath, val::Int)
Move `val` levels up in the paths hierarchy.
"""
function Base.:-(path::AbstractPath, val::Int)
parts = splitpath(path.path)
length(parts) <= val && begin
@warn "Cannot go $(val) steps up from $(path.path)"
return nothing
end
return Folder(joinpath(parts[1:(length(parts) - val)]))
end
# ---------------------------------- joining --------------------------------- #
"""
/(p1::AbstractPath, p2::AbstractPath)
Concatenate paths.
"""
Base.:/(p1::AbstractPath, p2::AbstractPath) = path(joinpath(p1.path, p2.path))
Base.:/(p1::AbstractPath, p2::String) = path(joinpath(p1.path, p2))
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 839 | """
name(dir::AbstractPath)
Get name (last part) of a path.
"""
name(dir::AbstractPath) = splitpath(dir.path)[end]
name(dir::String) = splitpath(dir)[end]
"""
base(p::AbstractPath)
Get the base (dirname) of a path.
Similar to `dirname(p)` but returns a `Folder` object instead of a `String`.
"""
base(p::AbstractPath) = path(dirname(p))
"""
    extension(f::File)
Get the extension of a file.
Returns an empty string for a folder.
"""
extension(f::File) = splitext(f)[end]
extension(f::Folder) = ""
"""
parent(path::AbstractPath)
Get the folder one level up in the hierarchy of a path.
"""
Base.parent(path::AbstractPath) = path - 1
"""
split(path::AbstractPath)::Tuple{AbstractPath,String}
Split a path into its base and its name.
"""
Base.split(path::AbstractPath)::Tuple{AbstractPath,String} = base(path), name(path)
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 4971 | abstract type AbstractPath end
"""
Folder
Stores the path to a (possibly not-existing) folder
"""
struct Folder <: AbstractPath
path::String
end
"""
File
Stores the path to a (possibly not-existing) file
"""
struct File <: AbstractPath
path::String
end
"""
path(p::String)
Create a `Folder` or a `File` type from a string with a path.
"""
function path(p::String)
# if it doesn't point to an existing path, infer folder/file from name
if ispath(p)
return if isfile(p)
File(p)
else
Folder(p)
end
else
_name = splitpath(p)[end]
return if occursin(".", _name)
File(p)
else
Folder(p)
end
end
end
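# Illustrative examples (the paths need not exist):
#   path("src")          # -> Folder (no extension in the last component)
#   path("src/types.jl") # -> File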
# ----------------------------------- repr ----------------------------------- #
title(f::Folder) = "📁 folder:"
title(f::File) = "📄 file:"
"""
repr_info
Create a compact textual representation of an AbstractPath
"""
function repr_info end
function repr_info(p::Folder)
_exists = exists(p) ? "{bold green}✔{/bold green}" : "{dim red}✖{/dim red}"
_nfiles = exists(p) ? "# files: {bright_blue}$(nfiles(p)){/bright_blue}" : ""
"{dim}exists: $_exists | $_nfiles{/dim}"
end
function repr_info(p::File)
_exists = exists(p) ? "{bold green}✔{/bold green}" : "{dim red}✖{/dim red}"
_size = get_file_format(filesize(p.path))
_size_info = exists(p) ? "# size: {bright_blue}$_size{/bright_blue}" : ""
"{dim}exists: $_exists | $_size_info{/dim}"
end
function Base.print(io::IO, p::AbstractPath)
path = RenderableText(
"{bright_blue}$(title(p)){/bright_blue} {italic}" *
highlight_path(p.path) *
"{italic}";
style = "bold white",
)
print(io, path / hLine(path; box = :HEAVY, style = "black") / repr_info(p))
end
Base.show(io::IO, ::MIME"text/plain", p::AbstractPath) = print(io, p)
# ------------------------------- Base methods ------------------------------- #
Base.splitpath(p::AbstractPath) = splitpath(p.path)
Base.isfile(f::Folder) = false
Base.isfile(f::File) = true
Base.ispath(p::AbstractPath) = ispath(p.path)
Base.isdir(f::Folder) = true
Base.isdir(f::File) = false
Base.dirname(p::AbstractPath) = dirname(p.path)
Base.splitext(p::AbstractPath) = splitext(p.path)
Base.mkdir(f::Folder; kwargs...) = mkdir(f.path; kwargs...)
Base.mkpath(f::Folder; kwargs...) = mkpath(f.path; kwargs...)
Base.cp(source::AbstractPath, dest::AbstractPath; kwargs...) =
cp(source.path, dest.path; kwargs...)
Base.cp(source::String, dest::AbstractPath; kwargs...) = cp(source, dest.path; kwargs...)
Base.cp(source::AbstractPath, dest::String; kwargs...) = cp(source.path, dest; kwargs...)
Base.mv(source::AbstractPath, dest::AbstractPath; kwargs...) =
mv(source.path, dest.path; kwargs...)
Base.mv(source::String, dest::AbstractPath; kwargs...) = mv(source, dest.path; kwargs...)
Base.mv(source::AbstractPath, dest::String; kwargs...) = mv(source.path, dest; kwargs...)
Base.rm(p::AbstractPath; kwargs...) = rm(p.path; kwargs...)
Base.readdir(f::Folder; kwargs...) = readdir(f.path; kwargs...)
# ---------------------------------------------------------------------------- #
# info #
# ---------------------------------------------------------------------------- #
"""
info(f::Folder)
Create a detailed visualization of a folder's content
"""
function info(f::Folder)
path =
RenderableText(title(f) * highlight_path(f.path) * "{italic}"; style = "bold white")
path /= hLine(path; box = :HEAVY, style = "black")
if !exists(f)
content = Panel(
rvstack(path, "{dim} does not exist yet{/dim}", pad = 1);
padding = (5, 5, 1, 1),
style = "dim blue",
)
else
# ----------------------------------- tree ----------------------------------- #
folder_tree = "" / tree(f.path)
# ----------------------------------- table ---------------------------------- #
properties = ["exists", "# files"]
values = [exists(f) ? "{green}yes{/green}" : "{red}no{/red}", nfiles(f)]
tb = Panel(
Table(
OrderedDict("" => properties, :value => string.(values)),
columns_justify = [:right, :center],
header_justify = [:right, :center],
header_style = "#f59d5b",
columns_style = ["#f2c56b", "bold"],
box = :SIMPLE_HEAD,
);
box = :SQUARE,
style = "dim",
title = "properties",
title_style = "default bright_blue",
fit = false,
width = 30,
)
content = Panel(
cvstack(path, (folder_tree * Spacer(folder_tree.measure.h, 5) * tb); pad = 1);
padding = (5, 5, 1, 1),
style = "dim blue",
fit = true,
)
end
print(content)
end
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 3803 | # ---------------------------------------------------------------------------- #
# on paths #
# ---------------------------------------------------------------------------- #
"""
exists(path::AbstractPath)
Check if an object exists at the target path
"""
exists(path::AbstractPath) = ispath(path.path)
exists(path::String) = ispath(path)
"""
nfiles(f::Folder)
Get the number of files in a folder
"""
nfiles(f::Folder) = exists(f) ? length(readdir(f.path)) : nothing
nfiles(f::String) = isdir(f) ? length(readdir(f)) : nothing
nfiles(f::File) = nothing
# ---------------------------------------------------------------------------- #
# contents #
# ---------------------------------------------------------------------------- #
"""
files(f::Folder)::Vector{File}
Get all files in a folder (without recursion)
"""
files(f::Folder)::Vector{File} =
exists(f) ? path.(filter(isfile, readdir(f; join = true))) : []
"""
subdirs(f::Folder)::Vector{Folder}
Get all subfolders in a folder (without recursion)
"""
subdirs(f::Folder)::Vector{Folder} =
exists(f) ? path.(filter(isdir, readdir(f; join = true))) : []
# ---------------------------------------------------------------------------- #
# visuals #
# ---------------------------------------------------------------------------- #
"""
highlight_path(path::String)
Add Term's markup syntax to highlights parts of a path.
"""
function highlight_path(path::String)
parts = splitpath(path)
parts[end] = "{bold white}$(parts[end]){/bold white}"
join(parts, "{bright_blue} > {/bright_blue}")
end
"""
get_file_format(nbytes; suffix="B")
Return a string with formatted file size.
"""
function get_file_format(nbytes::Int; suffix = "B")
for unit in ("", "K", "M", "G", "T", "P", "E", "Z", "Y")
nbytes < 1024 && return string(round(nbytes; digits = 2), ' ', unit, suffix)
nbytes = nbytes / 1024
end
end
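# Illustrative example: get_file_format(2048) == "2.0 KB"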
# ---------------------------------------------------------------------------- #
# FOLDERS TREE #
# ---------------------------------------------------------------------------- #
op_theme = Theme(
tree_max_leaf_width = 240,
tree_mid = "dim #6488f5",
tree_terminator = "dim #6488f5",
tree_skip = "dim #6488f5",
tree_dash = "dim #6488f5",
tree_trunc = "dim #6488f5",
tree_pair = "bold blue",
tree_keys = "bold blue",
tree_title = "bold white",
)
set_theme(op_theme)
"""
_tree(dir::String)::OrderedDict
Construct a dictionary storing files/subsfolders hierarchy form a directory.
Calls itself recursively to handle subfolders.
"""
function _tree(dir::String)::OrderedDict
tree_data = OrderedDict{String,Any}("files" => [])
for item in readdir(dir)
startswith(item, '.') && continue
path = joinpath(dir, item)
if isdir(path)
tree_data["📁 " * item] = _tree(path)
else
item, ext = splitext(item)
ext = "{bold dim}$ext{bold dim}"
length(tree_data["files"]) < 20 &&
push!(tree_data["files"], nothing => "{white}$(item)$(ext){/white}")
length(tree_data["files"]) == 20 && push!(
tree_data["files"],
nothing => "{white} ... files omitted ... {/white}",
)
end
end
tree_data
end
"""
tree(dir::String)::Tree
Construct a term Tree with a folder's content.
"""
function tree(dir::String)::Tree
return Tree(_tree(dir); title = "{bright_blue}$(name(dir)){bright_blue}")
end
tree(f::Folder) = tree(f.path)
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | code | 962 | using ObjectivePaths
using Test
@testset "Folder" begin
fld = pwd()
@info "Test - folder: $fld"
fold = path(fld)
@test typeof(fold) == Folder
@test exists(fold) == true
@test name(fold) == splitpath(fld)[end]
@test parent(fold) == base(fold)
@test parent(fold) == fold - 1
# test a non existing folder
ne = fold / "asafisufhsndfnssfnais"
@test ne isa Folder
@test exists(ne) == false
end
@testset "Folder utils" begin
fld = path(pwd())
fls = files(fld)
@test typeof(fls) == Vector{File}
@test nfiles(fld) == length(fls)
subs = subdirs(fld - 1)
@test typeof(subs) == Vector{Folder}
end
@testset "Folder info_print" begin
fld = path(pwd())
tree(fld)
info(fld)
print(fld)
end
@testset "File" begin
fl = files(path(pwd()))[1]
@test typeof(fl) == File
@test exists(fl) == true
end
@testset "File print" begin
fl = files(path(pwd()))[1]
print(fl)
end
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | docs | 1033 | # ObjectivePaths
[](https://codecov.io/gh/FedeClaudi/ObjectivePaths.jl)
[](https://github.com/FedeClaudi/ObjectivePaths.jl/actions/workflows/CI.yml)
[](https://FedeClaudi.github.io/ObjectivePaths.jl/stable)
# ObjectivePaths
ObjectivePaths is a small Julia library aiming to make a few operations around handling paths to folders and files easier. It's a small wrapper around Base's [file system](https://docs.julialang.org/en/v1/base/file/) applying some ideas from Python's [pathlib](https://docs.python.org/3/library/pathlib.html) library, in a Julian way.
Installation:
```Julia
] add ObjectivePaths
```
See documentation for more details.
Also, please consider supporting my work if you find it valuable!
[](https://ko-fi.com/C0C5E36Z2)
| ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | docs | 3268 | ```@meta
CurrentModule = ObjectivePaths
```
# ObjectivePaths
ObjectivePaths is a small Julia library aiming to make a few operations around handling paths to folders and files easier. It's a small wrapper around Base's [file system](https://docs.julialang.org/en/v1/base/file/) applying some ideas from Python's [pathlib](https://docs.python.org/3/library/pathlib.html) library, in a Julian way.
Installation:
```Julia
] add ObjectivePaths
```
## AbstractPath, Folder & File
The first thing you need is to craete pointers to paths (folders/files). This is done by calling the `path` function on a string with a filepath:
``` @example op
using ObjectivePaths
current_folder = pwd() # path to current folder
path(current_folder) # create a Folder type
```
(note: the display rendered here will look a bit different in your terminal - give it a go by copy-pasting the code above and running it in the REPL).
As you can see, calling `path` on a string pointing to a folder creates a `Folder` type. If, instead, you are using a file:
``` @example op
parent_content = readdir(parent(path(current_folder)); join=true) # get content of parent folder
files_paths = filter(isfile, parent_content) # get only files
path(files_paths[1]) # pointer to a file
```
this creates a `File`. These are the two subtypes of `AbstractPath`. There are a few methods for `AbstractPath`s which can make your life easier, starting with printing nicely formatted info as shown above. But you can do more:
``` @example op
fld = path(current_folder) # Folder object
println(fld)
exists(fld) |> println # true if folder exists
nfiles(fld) |> println # number of files in folder
name(fld) |> println # name of folder (last part of the path)
# also most Base methods are available for AbstractPaths
split(fld) |> println # split path into base/name
```
### Folder-specific methods
With `Folder` objects you can do a few more, starting from viewing more info (or use `tree` to just print out the folder structure):
``` @example op
info(fld - 1) # -1 moves us one level up the hierarchy
```
Or get the files/subfolders in your folder
``` @example op
println("Subfolders in folder:")
subdirs(fld-1) |> print
```
```@example op
println("Files in folder's parent:")
files(parent(fld)) |> print # parent goes up one level
```
## Manipulating paths
One of the things that can be a bit annoying is manipulating paths. Normally, you'd create a `String` with the path you need, or you combine things like `splitpath` and `joinpath` to create a path. Not fun.
We can make things a bit easier.
``` @example op
# say you want to get access to a folder 3 levels up the current one
fld - 3 # done
```
Yep, that's it.
But what if you want to create a new folder in there? Need to split, join paths or something? Nope
``` @example op
newfld = (fld - 3) / "new_folder_that_doesnt_exist_yet" # point to new folder
```
and then you can use `mkdir` or `mkpath` as you would normally, neat.
## Coda
That's it for now. But if you have issues, questions, or ideas for improvements, get in touch on GitHub!
Also, please consider supporting my work if you find it valuable!
[](https://ko-fi.com/C0C5E36Z2) | ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.2 | 5be88dcc6bb8b1f16e8d515f9abee642f52a79c0 | docs | 186 | # Library
List of all types and methods in the library
```@meta
CurrentModule = ObjectivePaths
```
```@index
Pages = ["library.md"]
```
```@autodocs
Modules = [ObjectivePaths]
``` | ObjectivePaths | https://github.com/FedeClaudi/ObjectivePaths.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 997 | using BallArithmetic
using Documenter
using DocumenterCitations
bib = CitationBibliography(
joinpath(@__DIR__, "src", "refs.bib");
style = :numeric
)
DocMeta.setdocmeta!(BallArithmetic, :DocTestSetup, :(using BallArithmetic);
recursive = true)
makedocs(;
plugins = [bib],
modules = [BallArithmetic],
authors = "Luca Ferranti, Isaia Nisoli",
repo = Documenter.Remotes.GitHub("JuliaBallArithmetic", "BallArithmetic.jl"),
sitename = "BallArithmetic.jl",
format = Documenter.HTML(;
prettyurls = get(ENV, "CI", "false") == "true",
canonical = "https://juliaballarithmetic.github.io/BallArithmetic.jl/",
edit_link = "main",
assets = String["assets/citations.css"]),
pages = [
"Home" => "index.md",
"API" => "API.md",
"Eigenvalues" => "eigenvalues.md",
"References" => "references.md"
])
deploydocs(;
repo = "github.com/JuliaBallArithmetic/BallArithmetic.jl.git",
devbranch = "main")
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 1919 | module IntervalArithmeticExt
using BallArithmetic
import IntervalArithmetic
"""
Convert an Interval from IntervalArithmetic to a Ball
"""
function BallArithmetic.Ball(x::IntervalArithmetic.Interval{Float64})
c, r = IntervalArithmetic.mid(x), IntervalArithmetic.radius(x)
return Ball(c, r)
end
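# Illustrative sketch (assuming IntervalArithmetic's `interval` constructor):
#   Ball(IntervalArithmetic.interval(1.0, 2.0))   # ≈ Ball(1.5, 0.5)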
"""
Convert an Complex{Interval} from IntervalArithmetic to a Ball
"""
function BallArithmetic.Ball(x::Complex{IntervalArithmetic.Interval{Float64}})
r_mid, r_rad = IntervalArithmetic.mid(real(x)), IntervalArithmetic.radius(real(x))
i_mid, i_rad = IntervalArithmetic.mid(imag(x)), IntervalArithmetic.radius(imag(x))
rad = setrounding(Float64, RoundUp) do
return sqrt(r_rad^2 + i_rad^2)
end
return Ball(r_mid + im * i_mid, rad)
end
"""
Construct a BallMatrix from a matrix of Interval{Float64}
"""
function BallArithmetic.BallMatrix(x::Matrix{IntervalArithmetic.Interval{Float64}})
C, R = IntervalArithmetic.mid.(x), IntervalArithmetic.radius.(x)
return BallMatrix(C, R)
end
"""
Construct a BallMatrix from a matrix of Complex{Interval{Float64}}, remark
that the radius may be bigger, to ensure mathematical consistency, i.e.,
we need to find a ball that contains a rectangle
"""
function BallArithmetic.BallMatrix(x::Matrix{Complex{IntervalArithmetic.Interval{Float64}}})
R_mid, R_rad = IntervalArithmetic.mid.(real.(x)), IntervalArithmetic.radius.(real.(x))
I_mid, I_rad = IntervalArithmetic.mid.(imag.(x)), IntervalArithmetic.radius.(imag.(x))
Rad = setrounding(Float64, RoundUp) do
return sqrt.(R_rad .^ 2 + I_rad .^ 2)
end
return BallMatrix(R_mid + im * I_mid, Rad)
end
function IntervalArithmetic.interval(x::Ball{Float64, Float64})
up = setrounding(Float64, RoundUp) do
return x.c + x.r
end
    # the lower endpoint must be rounded down so that the interval encloses the ball
    down = setrounding(Float64, RoundDown) do
return x.c - x.r
end
return IntervalArithmetic.interval(down, up)
end
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 573 | import Pkg;
Pkg.activate("../");
using BallArithmetic
import Pkg;
Pkg.add("RigorousInvariantMeasures");
using RigorousInvariantMeasures
@time begin
B = Fourier1D(128)
T(x) = 3.3 * x * (1 - x)
NK = RigorousInvariantMeasures.GaussianNoise(B, 0.5)
P = assemble(B, T)
import IntervalArithmetic
midI = IntervalArithmetic.mid
Pfloat = midI.(real.(P)) + im * midI.(imag.(P))
Q = NK.NK * Pfloat
enc = BallArithmetic.compute_enclosure(BallMatrix(Q), 0.5, 0.9, 10^-10)
end
Pkg.add("JLD")
using JLD
save("enc33.jld", "Q", Q, "enc", enc)
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 2177 | module BallArithmetic
include("numerical_test/multithread.jl")
using LinearAlgebra
if Sys.ARCH == :x86_64
using OpenBLASConsistentFPCSR_jll
else
@warn "The behaviour of multithreaded OpenBlas on this architecture is unclear,
we will fallback to single threaded OpenBLAS
We refer to
https://www.tuhh.de/ti3/rump/intlab/Octave/INTLAB_for_GNU_Octave.shtml
"
end
function __init__()
if Sys.ARCH == :x86_64
@info "Switching to OpenBLAS with ConsistentFPCSR = 1 flag enabled, guarantees
correct floating point rounding mode over all threads."
BLAS.lbt_forward(OpenBLASConsistentFPCSR_jll.libopenblas_path; verbose = true)
N = BLAS.get_num_threads()
K = 1024
if NumericalTest.rounding_test(N, K)
@info "OpenBLAS is giving correct rounding on a ($K,$K) test matrix on $N threads"
else
@warn "OpenBLAS is not rounding correctly on the test matrix"
@warn "The number of BLAS threads was set to 1 to ensure rounding mode is consistent"
if !NumericalTest.rounding_test(1, K)
@warn "The rounding test failed on 1 thread"
end
end
else
BLAS.set_num_threads(1)
@warn "The number of BLAS threads was set to 1 to ensure rounding mode is consistent"
if !NumericalTest.rounding_test(1, 1024)
@warn "The rounding test failed on 1 thread"
end
end
end
using RoundingEmulator, MacroTools, SetRounding
export ±, mid, rad
include("rounding/rounding.jl")
include("types/ball.jl")
export Ball, BallF64, BallComplexF64
include("types/matrix.jl")
export BallMatrix
include("types/vector.jl")
export BallVector
include("types/array.jl")
include("types/convertpromote.jl")
include("norm_bounds/rigorous_norm.jl")
export upper_bound_norm
include("norm_bounds/rigorous_opnorm_bounds.jl")
export upper_bound_L1_opnorm, upper_bound_L2_opnorm, upper_bound_L_inf_opnorm
include("eigenvalues/gev.jl")
include("eigenvalues/upper_bound_spectral.jl")
include("svd/svd.jl")
include("pseudospectra/rigorous_contour.jl")
include("matrix_classifiers/is_M_matrix.jl")
include("fft/fft.jl")
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 2285 |
# Implementing Theorem 2 Miyajima
# Numerical enclosure for each eigenvalue in generalized eigenvalue problem
"""
gevbox(A::BallMatrix{T}, B::BallMatrix{T})
Compute rigorous enclosure of each eigenvalue in generalized eigenvalue problem
following Ref. [Miyajima2012](@cite)
# References
* [Miyajima2012](@cite) Miyajima, JCAM 246, 9 (2012)
"""
function gevbox(A::BallMatrix{T}, B::BallMatrix{T}) where {T}
gev = eigen(A.c, B.c)
return _certify_gev(A, B, gev)
end
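# Illustrative usage sketch (hypothetical matrices, not from the package tests):
#   A = BallMatrix([1.0 0.1; 0.2 2.0])
#   B = BallMatrix([2.0 0.0; 0.0 3.0])
#   gevbox(A, B)   # -> vector of Balls enclosing the generalized eigenvalues of (A, B)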
function _certify_gev(A::BallMatrix{T}, B::BallMatrix{T}, gev::GeneralizedEigen) where {T}
X = gev.vectors
Y = inv(B.c * X)
bX = BallMatrix(X)
bY = BallMatrix(Y)
S = bY * B * bX - I
normS = upper_bound_L_inf_opnorm(S)
@debug "norm S" normS
@assert normS<1 "It is not possible to verify the eigenvalues with this precision"
bD = BallMatrix(Diagonal(gev.values))
R = bY * (A * bX - B * bX * bD)
normR = upper_bound_L_inf_opnorm(R)
@debug "norm R" normR
den_up = @down (1.0 - normS)
eps = @up normR / den_up
return [Ball(lam, eps) for lam in gev.values]
end
"""
evbox(A::BallMatrix{T})
Compute rigorous enclosure of each eigenvalue following Ref. [Miyajima2012](@cite)
TODO: Using Miyajima's algorithm is overkill, may be worth using
# References
* [Miyajima2012](@cite) Miyajima, JCAM 246, 9 (2012)
"""
function evbox(A::BallMatrix{T}) where {T}
gev = eigen(A.c)
return _certify_evbox(A, gev)
end
function _certify_evbox(A::BallMatrix{T}, gev::Eigen) where {T}
X = gev.vectors
Y = inv(X)
bX = BallMatrix(X)
bY = BallMatrix(Y)
S = bY * bX - I
normS = upper_bound_L_inf_opnorm(S)
@debug "norm S" normS
@assert normS<1 "It is not possible to verify the eigenvalues with this precision",
normS,
norm(X, 2),
norm(Y, 2)
bD = BallMatrix(Diagonal(gev.values))
# probably something better can be done here
# since this is not GEV, but only EV
# need to look better at Miyajima
# https://www.sciencedirect.com/science/article/pii/S037704270900795X
R = bY * (A * bX - bX * bD)
normR = upper_bound_L_inf_opnorm(R)
@debug "norm R" normR
den_up = @down (1.0 - normS)
eps = @up normR / den_up
return [Ball(lam, eps) for lam in gev.values]
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 705 | """
collatz_upper_bound
Computes an upper bound for the spectral radius of a matrix A, by
using the Collatz-Wielandt theorem on |A|.
This idea comes from Ref. [Rump2011](@cite)
# References
* [Rump2011](@cite) Rump S., BIT 51, 2 (2011)
"""
function collatz_upper_bound(A::BallMatrix{T}; iterates = 10) where {T}
m, k = size(A)
x_old = ones(m)
x_new = x_old
absA = upper_abs(A)
#@info opnorm(absA, Inf)
# using Collatz theorem
lam = setrounding(T, RoundUp) do
for _ in 1:iterates
x_old = x_new
x_new = absA * x_old
#@info maximum(x_new ./ x_old)
end
lam = maximum(x_new ./ x_old)
end
return lam
end
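# Illustrative sketch: both rows of the matrix below sum to 0.6, so the
# Collatz iteration returns (an upward rounding of) the spectral radius 0.6
#   A = BallMatrix([0.5 0.1; 0.2 0.4])
#   collatz_upper_bound(A)   # expected to be approximately 0.6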
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 2105 | import AbstractFFTs
import FFTW
"""
fft
Computes the FFT of a BallMatrix using the a priori error bound in
Ref. [Higham1996](@cite)
# References
* [Higham1996](@cite) Higham, Siam (1996)
"""
function fft(A::BallMatrix{T}, dims = (1, 2)) where {T}
    if any([!ispow2(size(A.c)[i]) for i in dims])
@warn "The rigorous error estimate works for power of two sizes"
end
FFTAc = FFTW.fft(A.c, dims)
N = prod([size(A.c)[i] for i in dims])
#norms = [upper_bound_norm(x, r) for (x, r) in zip(eachslice(A.c; dims), eachslice(A.r; dims))]
norms_c = setrounding(T, RoundUp) do
return [LinearAlgebra.norm(v) for v in eachslice(A.c; dims)]
end
norms_r = setrounding(T, RoundUp) do
return [LinearAlgebra.norm(v) for v in eachslice(A.r; dims)]
end
μ = eps(eltype(A.c))
u = eps(eltype(A.c))
γ4 = @up 4.0 * u / (1.0 - 4.0 * u)
η = @up μ + γ4 * (sqrt_up(2.0) + μ)
l = @up log2(N) / sqrt_down(T(N))
err = @up l .* (η / (1.0 - η) .* norms_c .+ norms_r)
err_M = repeat(err, outer = size(A.c))
#err_M = vcat([err for _ in 1:N]...)
#@info err_M
#reshape(err_M, size(A.r))
return BallMatrix(FFTAc, err_M)
end
"""
fft
Computes the FFT of a BallVector using the a priori error bound in
Ref. [Higham1996](@cite)
# References
* [Higham1996](@cite) Higham, Siam (1996)
"""
function fft(v::BallVector{T}) where {T}
if !ispow2(length(v))
@warn "The rigorous error estimate works for power of two sizes"
end
FFTAc = FFTW.fft(v.c)
N = length(v)
#norms = [upper_bound_norm(x, r) for (x, r) in zip(eachslice(A.c; dims), eachslice(A.r; dims))]
norms_c = setrounding(T, RoundUp) do
return norm(v.c)
end
norms_r = setrounding(T, RoundUp) do
return norm(v.r)
end
μ = eps(eltype(v.c))
u = eps(eltype(v.c))
γ4 = @up 4.0 * u / (1.0 - 4.0 * u)
η = @up μ + γ4 * (sqrt_up(2.0) + μ)
l = @up log2(N) / sqrt_down(T(N))
err = @up l * (η / (1.0 - η) * norms_c + norms_r)
err_M = fill(err, N)
return BallVector(FFTAc, err_M)
end
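# Illustrative sketch (power-of-two length, hypothetical data):
#   v = BallVector(rand(8), zeros(8))
#   BallArithmetic.fft(v)   # centers are FFTW.fft(v.c); radii account for the a priori FFT error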
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 893 | """
Returns a matrix containing the off-diagonal elements
"""
function off_diagonal_abs(A::BallMatrix)
B = deepcopy(upper_abs(A))
for i in diagind(B)
B[i] = 0.0
end
return BallMatrix(B)
end
"""
Computes a vector containing lower bounds for the diagonal elements of |A|
"""
function diagonal_abs_lower_bound(A::BallMatrix{T}) where {T}
v = setrounding(T, RoundDown) do
abs.(diag(A.c)) - abs.(diag(A.r))
end
return v
end
using LinearAlgebra: diag
"""
Rigorous computer assisted proof of the fact that a matrix is an
[M-matrix](https://en.wikipedia.org/wiki/M-matrix)
"""
function is_M_matrix(A::BallMatrix)
B = off_diagonal_abs(A)
v = diagonal_abs_lower_bound(A)
if all(v .> 0.0) && iszero(B.c)
return true
end
    ρ = collatz_upper_bound(B)
if all(v .> ρ)
return true
end
return false
end
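# Illustrative sketch: a strictly diagonally dominant matrix
#   A = BallMatrix([2.0 -0.5; -0.5 2.0])
#   is_M_matrix(A)   # expected to return true, since 2 > ρ(|off-diagonal part|) = 0.5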
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 696 | import LinearAlgebra
function _upper_bound_norm(center, radius, p::Real = 2)
T = eltype(center)
norm = setrounding(T, RoundUp) do
return LinearAlgebra.norm(center, p) + LinearAlgebra.norm(radius, p)
end
return norm
end
"""
upper_bound_norm(A::BallMatrix, p::Real = 2)
Compute a rigorous upper bound for the Frobenius p-norm of a BallMatrix
"""
function upper_bound_norm(A::BallMatrix, p::Real = 2)
return _upper_bound_norm(A.c, A.r, p)
end
"""
upper_bound_norm(v::BallVector, p::Real = 2)
Compute a rigorous upper bound for the p-norm of a BallVector
"""
function upper_bound_norm(v::BallVector, p::Real = 2)
return _upper_bound_norm(v.c, v.r, p)
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 3453 | export collatz_upper_bound_L2_opnorm,
upper_bound_L1_opnorm, upper_bound_L_inf_opnorm, upper_bound_L2_opnorm
"""
upper_abs(A)
Return a floating point matrix `B` whose entries are bigger
or equal (componentwise) any of the entries of `A`
"""
function upper_abs(A::BallMatrix{T}) where {T}
absA = setrounding(T, RoundUp) do
return abs.(A.c) + A.r
end
return absA
end
"""
collatz_upper_bound_L2_opnorm(A::BallMatrix; iterates=10)
Give a rigorous upper bound on the ℓ² norm of the matrix `A`
by using the Collatz theorem.
We use Perron theory here: if `B` is nonnegative and `|A| ≤ B` entrywise,
then ρ(A) ≤ ρ(B) by
[Wielandt's theorem](https://mathworld.wolfram.com/WielandtsTheorem.html)
The keyword argument `iterates` is used to establish how many
times we are iterating the vector of ones before we use Collatz's
estimate.
"""
function collatz_upper_bound_L2_opnorm(A::BallMatrix{T}; iterates = 10) where {T}
m, k = size(A)
x_old = ones(m)
x_new = x_old
absA = upper_abs(A)
#@info opnorm(absA, Inf)
# using Collatz theorem
lam = setrounding(T, RoundUp) do
for _ in 1:iterates
x_old = x_new
x_new = absA' * absA * x_old
#@info maximum(x_new ./ x_old)
end
return maximum(x_new ./ x_old)
end
return sqrt_up(lam)
end
using LinearAlgebra
"""
upper_bound_L1_opnorm(A::BallMatrix{T})
Returns a rigorous upper bound on the ℓ¹-norm of the ball matrix `A`
"""
function upper_bound_L1_opnorm(A::BallMatrix{T}) where {T}
norm = setrounding(T, RoundUp) do
return opnorm(A.c, 1) + opnorm(A.r, 1)
end
return norm
end
"""
upper_bound_L_inf_opnorm(A::BallMatrix{T})
Returns a rigorous upper bound on the ℓ-∞-norm of the ball matrix `A`
"""
function upper_bound_L_inf_opnorm(A::BallMatrix{T}) where {T}
norm = setrounding(T, RoundUp) do
return opnorm(A.c, Inf) + opnorm(A.r, Inf)
end
return norm
end
"""
    upper_bound_L2_opnorm(A::BallMatrix{T})
Returns a rigorous upper bound on the ℓ²-norm of the ball matrix `A`
using the best between the Collatz bound and the interpolation bound
"""
function upper_bound_L2_opnorm(A::BallMatrix{T}) where {T}
norm1 = upper_bound_L1_opnorm(A)
norminf = upper_bound_L_inf_opnorm(A)
norm_prod = @up norm1 * norminf
return min(collatz_upper_bound_L2_opnorm(A), sqrt_up(norm_prod))
end
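# Illustrative sketch: the result dominates the true operator norm of the center
#   A = BallMatrix(rand(5, 5))
#   upper_bound_L2_opnorm(A) >= opnorm(A.c)   # holds, since the bound is rigorous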
"""
svd_bound_L2_opnorm(A::BallMatrix{T})
Returns a rigorous upper bound on the ℓ²-norm of the ball matrix `A`
using the rigorous enclosure for the singular values implemented in
svd/svd.jl
"""
function svd_bound_L2_opnorm(A::BallMatrix{T}) where {T}
σ = svdbox(A)
top = σ[1]
return @up top.c + top.r
end
"""
svd_bound_L2_opnorm_inverse(A::BallMatrix)
Returns a rigorous upper bound on the ℓ²-norm of the inverse of the
ball matrix `A` using the rigorous enclosure for the singular values
implemented in svd/svd.jl
"""
function svd_bound_L2_opnorm_inverse(A::BallMatrix)
σ = svdbox(A)
if in(0, σ[end])
return +Inf
end
inv_inf = Ball(1.0) / σ[end]
return @up inv_inf.c + inv_inf.r
end
using LinearAlgebra
"""
svd_bound_L2_resolvent(A::BallMatrix, lam::Ball)
Returns a rigorous upper bound on the ℓ²-norm of the resolvent
of `A` at `λ`, i.e., ||(A-λ)^{-1}||_{ℓ²}
"""
svd_bound_L2_resolvent(A::BallMatrix, λ::Ball) = svd_bound_L2_opnorm_inverse(A - λ * I)
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 661 | module NumericalTest
export rounding_test
function _test_matrix(k)
A = zeros(Float64, (k, k))
A[:, end] = fill(2^(-53), k)
for i in 1:(k - 1)
A[i, i] = 1.0
end
return A
end
using LinearAlgebra
"""
rounding_test(n, k)
Let `u=fill(2^(-53), k-1)` and let A be the matrix
[I u;
0 2^(-53)]
This test checks the result of A*A' in different rounding modes,
running BLAS on `n` threads
"""
function rounding_test(n, k)
BLAS.set_num_threads(n)
A = _test_matrix(k)
B = setrounding(Float64, RoundUp) do
BLAS.gemm('N', 'T', 1.0, A, A)
end
return all([B[i, i] == nextfloat(1.0) for i in 1:(k - 1)])
end
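# Illustrative example: check directed rounding with 4 BLAS threads on a 1024×1024 test matrix
#   NumericalTest.rounding_test(4, 1024)   # true if the tested diagonal entries of A*A' round up to nextfloat(1.0)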
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 11637 | export Enclosure, bound_enclosure
struct Enclosure
λ::Any
points::Vector{ComplexF64}
bounds::Vector{Ball{Float64, Float64}}
radiuses::Vector{Float64}
loop_closure::Bool
end
function _compute_exclusion_circle(T, λ, r; max_steps, rel_steps)
return _compute_exclusion_set(T, r; max_steps, rel_steps, λ)
end
function _compute_exclusion_circle_level_set_ode(T,
λ,
ϵ;
max_steps,
rel_steps,
max_initial_newton)
z = λ + ϵ
@info z
for j in 1:max_initial_newton
@info "Newton step $j"
K = svd(T - z * I)
z, σ = _newton_step(z, K, ϵ)
@info σ
if (σ - ϵ) / ϵ < 1 / 256
break
end
end
r = abs(λ - z)
return _compute_exclusion_circle(T, λ, r; max_steps, rel_steps)
end
"""
_compute_exclusion_circle_level_set_priori(T,
λ,
ϵ;
rel_pearl_size,
max_initial_newton)
This method bounds the resolvent on a circle centered at `λ`
that intersects the `ϵ` level set in at least one point `z0`.
This intersection is found by a Newton step and fixes the radius
of the circle.
The value of `rel_pearl_size` gives the radius of the pearls
relative to the radius of the circle.
Some rules of thumb for the number of SVD computations:
if `rel_pearl_size` is 1/32, we compute and certify 160 SVDs;
if `rel_pearl_size` is 1/64, we compute and certify 320 SVDs.
In other words, the computation time scales linearly with the quality
of the pearl necklace.
"""
function _compute_exclusion_circle_level_set_priori(T,
λ,
ϵ;
rel_pearl_size,
max_initial_newton)
out_z = []
out_bound = []
out_radiuses = []
z = λ + ϵ
for j in 1:max_initial_newton
@info "Newton step $j"
K = svd(T - z * I)
z, σ = _newton_step(z, K, ϵ)
@info σ
if (σ - ϵ) / ϵ < 1 / 256
break
end
end
r = abs(λ - z)
pearl_radius = r * rel_pearl_size
@info "pearl radius" pearl_radius
dist_points = (pearl_radius * 8) / 5
@info "distance between points" dist_points
    # this N bounds from above 2πr/dist_points, i.e., the number of equispaced
    # points on the circumference
N = ceil(8 * r / dist_points)
# @info N
# for j in 0:(N - 1)
# z = λ + r * exp(2 * π * im * j / N)
# push!(out_z, z)
# K = svd(T - z * I)
# z_ball = Ball(z, pearl_radius)
# bound = _certify_svd(BallMatrix(T) - z_ball * I, K)[end]
# push!(out_bound, bound)
# push!(out_radiuses, pearl_radius)
# end
#return Enclosure(λ, out_z, out_bound, out_radiuses, true)
return _certify_circle(T, λ, r, N)
end
function _certify_circle(T, λ, r, N)
out_z = []
out_bound = []
out_radiuses = []
pearl_radius = 5 * (r * 2 * π / N) / 8
for j in 0:(N - 1)
z = λ + r * exp(2 * π * im * j / N)
push!(out_z, z)
K = svd(T - z * I)
z_ball = Ball(z, pearl_radius)
bound = _certify_svd(BallMatrix(T) - z_ball * I, K)[end]
push!(out_bound, bound)
push!(out_radiuses, pearl_radius)
end
return Enclosure(λ, out_z, out_bound, out_radiuses, true)
end
function _compute_exclusion_set(T, r; max_steps, rel_steps, λ = 0 + im * 0)
eigvals = diag(T)
out_z = []
out_bound = []
out_radiuses = []
loop_closure = false
z = λ + r
z0 = z
r_guaranteed_1 = 0.0
r_guaranteed = r_guaranteed_1
for t_step in 1:max_steps
z_old = z
r_old = r_guaranteed
K = svd(T - z * I)
τ = minimum(abs.(eigvals .- z)) / rel_steps
z = z + τ * im * (z - λ) / abs(z - λ)
z = z - (abs(z - λ)^2 - r^2) / conj(z - λ)
push!(out_z, z)
r_guaranteed = 5 * abs(z_old - z) / 8
if t_step == 1
r_guaranteed_1 = r_guaranteed
end
if t_step > 1
@assert r_guaranteed + r_old > abs(z_old - z)
end
# we certify in a ball around z_old
z_ball = Ball(z_old, r_guaranteed)
bound = _certify_svd(BallMatrix(T) - z_ball * I, K)[end]
push!(out_bound, bound)
push!(out_radiuses, r_guaranteed)
#print("test")
#@info "test"
#@info "r_guarantee", r_guaranteed
#@info "r_guarantee_1", r_guaranteed_1
#@info "dist to start", abs(z_old-z0)
loop_closure = abs(z_old - z0) < r_guaranteed + r_guaranteed_1
if t_step > 10 && loop_closure
@info t_step, "Loop closure"
break
end
end
return Enclosure(λ, out_z, out_bound, out_radiuses, loop_closure)
end
function bound_resolvent(E::Enclosure)
min_sing_val = minimum([@down x.c - x.r for x in E.bounds])
return @up 1.0 / min_sing_val
end
function check_enclosure(E::Enclosure)
check_overlap = true
for i in 1:(length(E.points) - 1)
check_overlap = abs(E.points[i + 1] - E.points[i]) <
E.radiuses[i] + E.radiuses[i + 1]
if check_overlap == false
return false
end
end
check_overlap = abs(E.points[1] - E.points[end]) < E.radiuses[1] + E.radiuses[end]
return check_overlap
end
function check_loop(λ, E::Enclosure)
end
function _follow_level_set(z::ComplexF64, τ::Float64, K::SVD)
u = K.U[:, end]
v = K.V[:, end]
σ = K.S[end]
# follow the level set
grad = v' * u
ort = im * grad
z = z + τ * ort / abs(ort)
return z, σ
end
function _newton_step(z, K::SVD, ϵ)
u = K.U[:, end]
v = K.V[:, end]
σ = K.S[end]
# gradient descent, better estimate
z = z + (σ - ϵ) / (u' * v)
return z, σ
end
# function newton_level_set(z, T, ϵ; τ=ϵ / 16)
# K = svd(z * I - T)
# return _newton_step(z, K, ϵ, τ)
# end
function _compute_enclosure_eigval(T, λ, ϵ; max_initial_newton, max_steps, rel_steps)
@info "Enclosing ", λ
@info "Level set", ϵ
eigvals = diag(T)
out_z = []
out_bound = []
radiuses = []
log_z = []
z = λ + 4 * sign(real(λ)) * ϵ
# we first use the newton method to approach the level set
for j in 1:max_initial_newton
K = svd(T - z * I)
z, σ = _newton_step(z, K, ϵ)
if (σ - ϵ) < ϵ / 256
break
end
end
# for j in 1:max_initial_newton
# K = svd(T - z * I)
# τ = minimum(abs.(eigvals .- z))/rel_steps
# z, σ = _follow_level_set(z, τ, K)
# z, σ = _newton_step(z, K, ϵ, τ)
# end
z0 = z
    r_guaranteed_1 = 0.0
    check_loop_closure = false
#push!(out_z, z)
#push!(log_z, log(z - λ))
for t_step in 1:max_steps
#@info t_step, max_steps
z_old = z
K = svd(T - z * I)
τ = minimum(abs.(eigvals .- z)) / rel_steps
z, σ = _follow_level_set(z, τ, K)
z, σ = _newton_step(z, K, ϵ)
# @info σ
push!(out_z, z)
push!(log_z, log(z - λ))
r_guaranteed = 5 * abs(z_old - z) / 8
if t_step == 1
r_guaranteed_1 = r_guaranteed
end
# we certify the SVD on a ball around z_old
z_ball = Ball(z_old, r_guaranteed)
bound = _certify_svd(BallMatrix(T) - z_ball * I, K)[end]
push!(out_bound, bound)
push!(radiuses, r_guaranteed)
# if the first point is inside the certification ball, we have found a loop closure
#@info "r_guaranteed+r_guaranteed_1", r_guaranteed+r_guaranteed_1, "dist to start", abs(z_old-z0)
angle = 0.0
if length(log_z) > 2
test = [log_z[i + 1] - log_z[i] for i in 1:(length(log_z) - 1)]
angle = imag(sum(test))
end
@info angle
check_loop_closure = abs(z_old - z0) < (r_guaranteed + r_guaranteed_1)
if t_step > 10 && check_loop_closure
@info t_step, "Loop closure"
break
end
end
    return Enclosure(λ, out_z, out_bound, radiuses, check_loop_closure)
end
# function _certify_circle(T, r1, r, ϵ)
# out_z = []
# out_bound = []
# N = ceil(2*π*r1/(r*ϵ))
# @info N
# dθ = 2*π/N
# z = r1*exp(0)
# for i in 0:N
# z_old = z
# z = r1*exp(im*i*dθ)
# K = svd(T - z * I)
# push!(out_z, z)
# z_ball = Ball(z_old, 1.5 * abs(z_old - z))
# bound = _certify_svd(BallMatrix(T) - z_ball * I, K)[end]
# push!(out_bound, bound)
# end
# return (out_z, out_bound)
# end
"""
compute_enclosure(A::BallMatrix, r1, r2, ϵ; max_initial_newton = 30,
max_steps = Int64(ceil(256 * π)), rel_steps = 16)
Given a BallMatrix `A`, this method follows the level lines of level `ϵ`
around the eigenvalues with modulus between `r1` and `r2`.
The keyword arguments are
- max_initial_newton: maximum number of Newton steps used to reach the level line
- max_steps: maximum number of steps when following the contour
- rel_steps: relative integration step for the Euler method
The method outputs an array of `Enclosure` objects, one per enclosed region; each stores
- the eigenvalue being enclosed (0.0 for the exclusion circles at `r1` and `r2`)
- the points on the enclosing contour, together with rigorous enclosures of the smallest
singular value of `T - zI` (where `T` is the computed Schur factor of `A`) on balls
centered at those points, whose radii are 5/8 of the distance to the previous point
- a flag recording whether the contour closed into a loop.
An upper bound on the resolvent norm along the contour can be recovered with
`bound_resolvent`. A usage sketch follows the definition.
"""
function compute_enclosure(A::BallMatrix, r1, r2, ϵ; max_initial_newton = 30,
max_steps = Int64(ceil(256 * π)), rel_steps = 16)
F = schur(Complex{Float64}.(A.c))
bZ = BallMatrix(F.Z)
errF = svd_bound_L2_opnorm(bZ' * bZ - I)
bT = BallMatrix(F.T)
errT = svd_bound_L2_opnorm(bZ * bT * bZ' - A)
@info "Schur unitary error", errF
@info "Schur reconstruction error", errT
eigvals = diag(F.T)[[r1 < abs(x) < r2 for x in diag(F.T)]]
@info "Certifying around", eigvals
output = []
for λ in eigvals
E = _compute_enclosure_eigval(F.T, λ, ϵ; max_initial_newton,
max_steps, rel_steps)
# bound, i = findmax([@up 1.0 / (@down x.c - x.r) for x in bounds])
# if bound < 0.0
# @warn "Smaller rel_step required"
# end
# @info "resolvent upper bound", bound
# @info "σ", bounds[i]
push!(output, E)
end
# encloses the eigenvalues inside r1
eigvals_smaller_than_r1 = diag(F.T)[[abs(x) < r1 for x in diag(F.T)]]
if !isempty(eigvals_smaller_than_r1)
@info "Computing exclusion circle ", r1
E = _compute_exclusion_set(F.T, r1; max_steps, rel_steps)
#bound, i = findmax([@up 1.0 / (@down x.c - x.r) for x in bounds])
#@info bound, i
#@info "σ", bounds[i]
@info bound_resolvent(E)
push!(output, E)
end
# # encloses the eigenvalues outside r2
eigvals_bigger_than_r2 = diag(F.T)[[abs(x) > r2 for x in diag(F.T)]]
if !isempty(eigvals_bigger_than_r2)
@info "Computing exclusion circle ", r2
E = _compute_exclusion_set(F.T, r2; max_steps, rel_steps)
#max_abs_eigenvalue = maximum(abs.(diag(F.T)))
#bound, i = findmax([@up 1.0 / (@down x.c - x.r) for x in bounds])
#@info bound, i
#@info "σ", bounds[i]
push!(output, E)
# r = minimum([abs(λ)-r2 for λ in eigvals_bigger_than_r2])/5
# @info "Gap between r2, $r2 and smallest eigenvalue outside, $r"
# curve, bound = _certify_circle(F.T, r2, r, ϵ)
# push!(output, (max_abs_eigenvalue, curve, bound))
end
return output
end
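# Usage sketch, mirroring test_pseudospectra.jl; the 2×2 matrix is illustrative.
#
#   bA = BallMatrix([1.0 0.0; 0.0 -1.0])
#   enc = compute_enclosure(bA, 0.0, 2.0, 0.01)  # enclose the eigenvalues with 0 < |λ| < 2
#   enc[1].λ                                     # eigenvalue being enclosed
#   bound_resolvent(enc[1])                      # rigorous bound on the resolvent along the contour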
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 357 | const ϵp = 2.0^-52
const η = 2.0^-1074
const op_up = Dict(:+ => :add_up, :- => :sub_up, :* => :mul_up, :/ => :div_up)
# Rewrite +, -, *, / into their round-up counterparts inside an expression.
macro up(ex)
    esc(MacroTools.postwalk(x -> get(op_up, x, x), ex))
end
const op_down = Dict(:+ => :add_down, :- => :sub_down, :* => :mul_down, :/ => :div_down)
# Rewrite +, -, *, / into their round-down counterparts inside an expression.
macro down(ex)
    esc(MacroTools.postwalk(x -> get(op_down, x, x), ex))
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 1474 | """
    svdbox(A::BallMatrix)
Return rigorous enclosures, as a vector of `Ball`s, of the singular values of `A`.
This follows Theorem 3.1 in Ref. [Rump2011](@cite). A usage sketch follows the definition.
# References
* [Rump2011](@cite) Rump S., BIT 51, 2 (2011)
"""
function svdbox(A::BallMatrix{T}) where {T}
svdA = svd(A.c)
return _certify_svd(A, svdA)
end
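# Usage sketch, mirroring the package test suite; the matrix below is illustrative.
#
#   A = BallMatrix([3.0 0.0 0.0; 0.0 2.0 0.0; 0.0 0.0 1.0])
#   Σ = svdbox(A)      # vector of Balls enclosing the singular values 3, 2, 1
#   Σ[1].c, Σ[1].r     # midpoint and radius of the enclosure of the largest one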
function _certify_svd(A::BallMatrix{T}, svdA::SVD) where {T}
U = BallMatrix(svdA.U)
Vt = BallMatrix(svdA.Vt)
Σ = BallMatrix(Diagonal(svdA.S))
V = BallMatrix(svdA.V)
E = U * Σ * Vt - A
normE = collatz_upper_bound_L2_opnorm(E)
@debug "norm E" normE
F = Vt * V - I
normF = collatz_upper_bound_L2_opnorm(F)
@debug "norm F" normF
G = U' * U - I
normG = collatz_upper_bound_L2_opnorm(G)
@debug "norm G" normG
@assert normF<1 "It is not possible to verify the singular values with this precision"
@assert normG<1 "It is not possible to verify the singular values with this precision"
den_down = @up (1.0 + normF) * (1.0 + normG)
den_up = @down (1.0 - normF) * (1.0 - normG)
svdbounds_down = setrounding(T, RoundDown) do
[(σ - normE) / den_down for σ in svdA.S]
end
svdbounds_up = setrounding(T, RoundUp) do
[(σ + normE) / den_up for σ in svdA.S]
end
midpoints = (svdbounds_down + svdbounds_up) / 2
rad = setrounding(T, RoundUp) do
[max(svdbounds_up[i] - midpoints[i], midpoints[i] - svdbounds_down[i])
for i in 1:length(midpoints)]
end
return [Ball(midpoints[i], rad[i]) for i in 1:length(midpoints)]
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 543 | struct BallArray{T <: AbstractFloat, N, NT <: Union{T, Complex{T}}, BT <: Ball{T, NT}, CA <: AbstractArray{NT, N}, RA <: AbstractArray{T, N}} <: AbstractArray{BT, N}
    c::CA
    r::RA
    # N is the array dimension: it is a value (e.g. 2), so it must not be bounded by `<:Integer`.
    function BallArray(c::AbstractArray{T, N}, r::AbstractArray{T, N}) where {T <: AbstractFloat, N}
        new{T, N, T, Ball{T, T}, typeof(c), typeof(r)}(c, r)
    end
    function BallArray(c::AbstractArray{Complex{T}, N}, r::AbstractArray{T, N}) where {T <: AbstractFloat, N}
        new{T, N, Complex{T}, Ball{T, Complex{T}}, typeof(c), typeof(r)}(c, r)
    end
end
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 2397 | struct Ball{T <: AbstractFloat, CT <: Union{T, Complex{T}}} <: Number
c::CT
r::T
end
BallF64 = Ball{Float64, Float64}
BallComplexF64 = Ball{Float64, ComplexF64}
±(c, r) = Ball(c, r)
Ball(c, r) = Ball(float(c), float(r))
Ball(c::T) where {T <: Number} = Ball(float(c), zero(float(real(T))))
Ball(x::Ball) = x
mid(x::Ball) = x.c
rad(x::Ball) = x.r
mid(x::Number) = x
rad(::T) where {T <: Number} = zero(float(real(T)))
midtype(::Ball{T, CT}) where {T, CT} = CT
radtype(::Ball{T, CT}) where {T, CT} = T
midtype(::Type{Ball{T, CT}}) where {T, CT} = CT
radtype(::Type{Ball{T, CT}}) where {T, CT} = T
midtype(::Type{Ball}) = Float64
radtype(::Type{Ball}) = Float64
Base.show(io::IO, ::MIME"text/plain", x::Ball) = print(io, x.c, " ± ", x.r)
###############
# CONVERSIONS #
###############
function Base.convert(::Type{Ball{T, CT}}, x::Ball) where {T, CT}
Ball(convert(CT, mid(x)), convert(T, rad(x)))
end
Base.convert(::Type{Ball{T, CT}}, c::Number) where {T, CT} = Ball(convert(CT, c), zero(T))
Base.convert(::Type{Ball}, c::Number) = Ball(c)
#########################
# ARITHMETIC OPERATIONS #
#########################
Base.:+(x::Ball) = x
Base.:-(x::Ball) = Ball(-x.c, x.r)
for op in (:+, :-)
@eval function Base.$op(x::Ball, y::Ball)
c = $op(mid(x), mid(y))
r = @up (ϵp * abs(c) + rad(x)) + rad(y)
Ball(c, r)
end
end
function Base.:*(x::Ball, y::Ball)
c = mid(x) * mid(y)
r = @up (η + ϵp * abs(c)) + ((abs(mid(x)) + rad(x)) * rad(y) + rad(x) * abs(mid(y)))
Ball(c, r)
end
# TODO: this probably is incorrect for complex balls
function Base.inv(y::Ball{<:AbstractFloat})
my, ry = mid(y), rad(y)
ry < abs(my) || throw(ArgumentError("Ball $y contains zero."))
c1 = @down 1.0 / (abs(my) + ry)
c2 = @up 1.0 / (abs(my) - ry)
c = @up c1 + 0.5 * (c2 - c1)
r = @up c - c1
Ball(copysign(c, my), r)
end
Base.:/(x::Ball, y::Ball) = x * inv(y)
# Base.abs(x::Ball) = Ball(max(0, sub_down(abs(mid(x)), rad(x))), add_up(abs(mid(x)), rad(x)))
#
function Base.abs(x::Ball)
if abs(x.c) > x.r
return Ball(abs(x.c), x.r)
else
val = add_up(abs(x.c), x.r) / 2
return Ball(val, val)
end
end
Base.conj(x::Ball) = Ball(conj(x.c), x.r)
Base.in(x::Number, B::Ball) = abs(B.c - x) <= B.r
function Base.inv(x::Ball{T, Complex{T}}) where {T <: AbstractFloat}
return conj(x) / (abs(x)^2)
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 500 | #promote_type(Type{Ball{T, T}}, Type{Ball{T, Complex{T}}) where{T} = Ball{T, Complex{T}}
Base.promote_rule(::Type{Ball{T, T}}, ::Type{Int64}) where {T} = Ball{T, T}
Base.promote_rule(::Type{Ball{T, T}}, ::Type{T}) where {T} = Ball{T, T}
Base.promote_rule(::Type{Ball{T, Complex{T}}}, ::Type{Ball{T, T}}) where {T} = Ball{T,Complex{T}}
Base.promote_rule(::Type{Ball{T, T}}, ::Type{Complex{T}}) where {T} = Ball{T,Complex{T}}
# we should implement conversion rules also for BigInt and BigFloat... | BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 8027 | struct BallMatrix{T <: AbstractFloat, NT <: Union{T, Complex{T}}, BT <: Ball{T, NT},
CM <: AbstractMatrix{NT}, RM <: AbstractMatrix{T}} <: AbstractMatrix{BT}
c::CM
r::RM
function BallMatrix(c::AbstractMatrix{T},
r::AbstractMatrix{T}) where {T <: AbstractFloat}
new{T, T, Ball{T, T}, typeof(c), typeof(r)}(c, r)
end
function BallMatrix(c::AbstractMatrix{Complex{T}},
r::AbstractMatrix{T}) where {T <: AbstractFloat}
new{T, Complex{T}, Ball{T, Complex{T}}, typeof(c), typeof(r)}(c, r)
end
end
BallMatrix(M::AbstractMatrix) = BallMatrix(mid.(M), rad.(M))
mid(A::AbstractMatrix) = A
rad(A::AbstractMatrix) = zeros(eltype(A), size(A))
# mid(A::BallMatrix) = map(mid, A)
# rad(A::BallMatrix) = map(rad, A)
mid(A::BallMatrix) = A.c
rad(A::BallMatrix) = A.r
# Array interface
Base.eltype(::BallMatrix{T, NT, BT}) where {T, NT, BT} = BT
Base.IndexStyle(::Type{<:BallMatrix}) = IndexLinear()
Base.size(M::BallMatrix, i...) = size(M.c, i...)
function Base.getindex(M::BallMatrix, i::Int64)
return Ball(getindex(M.c, i), getindex(M.r, i))
end
function Base.getindex(M::BallMatrix, I::CartesianIndex{1})
return Ball(getindex(M.c, I), getindex(M.r, I))
end
function Base.getindex(M::BallMatrix, i::Int64, j::Int64)
return Ball(getindex(M.c, i, j), getindex(M.r, i, j))
end
function Base.getindex(M::BallMatrix, I::CartesianIndex{2})
return Ball(getindex(M.c, I), getindex(M.r, I))
end
function Base.getindex(M::BallMatrix, inds...)
return BallMatrix(getindex(M.c, inds...), getindex(M.r, inds...))
end
function Base.display(X::BallMatrix{
T, NT, Ball{T, NT}, Matrix{NT},
Matrix{T}}) where {T <: AbstractFloat, NT <: Union{T, Complex{T}}}
#@info "test"
m, n = size(X)
B = [Ball(X.c[i, j], X.r[i, j]) for i in 1:m, j in 1:n]
display(B)
end
function Base.setindex!(M::BallMatrix, x, inds...)
setindex!(M.c, mid(x), inds...)
setindex!(M.r, rad(x), inds...)
end
Base.copy(M::BallMatrix) = BallMatrix(copy(M.c), copy(M.r))
function Base.zeros(::Type{B}, dims::NTuple{N, Integer}) where {B <: Ball, N}
BallMatrix(zeros(midtype(B), dims), zeros(radtype(B), dims))
end
function Base.ones(::Type{B}, dims::NTuple{N, Integer}) where {B <: Ball, N}
BallMatrix(ones(midtype(B), dims), zeros(radtype(B), dims))
end
# LinearAlgebra functions
function LinearAlgebra.adjoint(M::BallMatrix)
return BallMatrix(mid(M)', rad(M)')
end
# Operations
for op in (:+, :-)
@eval function Base.$op(A::BallMatrix{T}, B::BallMatrix{T}) where {T <: AbstractFloat}
mA, rA = mid(A), rad(A)
mB, rB = mid(B), rad(B)
C = $op(mA, mB)
R = setrounding(T, RoundUp) do
R = (ϵp * abs.(C) + rA) + rB
end
BallMatrix(C, R)
end
end
function Base.:*(lam::Number, A::BallMatrix{T}) where {T}
B = LinearAlgebra.copymutable_oftype(A.c,
Base._return_type(+,
Tuple{eltype(A.c), typeof(lam)}))
B = lam * A.c
R = setrounding(T, RoundUp) do
return (η .+ ϵp * abs.(B)) + (A.r * abs(mid(lam)))
end
return BallMatrix(B, R)
end
function Base.:*(lam::Ball{T, NT}, A::BallMatrix{T}) where {T, NT <: Union{T, Complex{T}}}
B = LinearAlgebra.copymutable_oftype(A.c,
Base._return_type(+,
Tuple{eltype(A.c),
typeof(mid(lam))}))
B = mid(lam) * A.c
R = setrounding(T, RoundUp) do
return (η .+ ϵp * abs.(B)) + ((abs.(A.c) + A.r) * rad(lam) + A.r * abs(mid(lam)))
end
return BallMatrix(B, R)
end
# function Base.:*(lam::NT, A::BallMatrix{T}) where {T, NT<:Union{T,Complex{T}}}
# B = LinearAlgebra.copymutable_oftype(A.c, Base._return_type(+, Tuple{eltype(A.c),typeof(mid(lam))}))
# B = lam * A.c
# R = setrounding(T, RoundUp) do
# return (η .+ ϵp * abs.(B)) + (A.r * abs(mid(lam)))
# end
# return BallMatrix(B, R)
# end
for op in (:+, :-)
@eval function Base.$op(A::BallMatrix{T}, B::Matrix{T}) where {T <: AbstractFloat}
mA, rA = mid(A), rad(A)
C = $op(mA, B)
R = setrounding(T, RoundUp) do
R = (ϵp * abs.(C) + rA)
end
BallMatrix(C, R)
end
# + and - are commutative
@eval function Base.$op(B::Matrix{T}, A::BallMatrix{T}) where {T <: AbstractFloat}
$op(A, B)
end
end
function Base.:+(A::BallMatrix{T}, J::UniformScaling) where {T}
LinearAlgebra.checksquare(A)
B = LinearAlgebra.copymutable_oftype(A.c,
Base._return_type(+,
Tuple{eltype(A.c), typeof(J)}))
R = copy(A.r)
@inbounds for i in axes(A, 1)
B[i, i] += J
end
R = setrounding(T, RoundUp) do
@inbounds for i in axes(A, 1)
R[i, i] += ϵp * abs(B[i, i])
end
return R
end
return BallMatrix(B, R)
end
function Base.:+(A::BallMatrix{T},
J::UniformScaling{Ball{T, NT}}) where {T, NT <: Union{T, Complex{T}}}
LinearAlgebra.checksquare(A)
B = LinearAlgebra.copymutable_oftype(A.c, Base._return_type(+, Tuple{eltype(A.c), NT}))
R = copy(A.r)
@inbounds for i in axes(A, 1)
B[i, i] += J.λ.c
end
R = setrounding(T, RoundUp) do
@inbounds for i in axes(A, 1)
R[i, i] += ϵp * abs(B[i, i]) + J.λ.r
end
return R
end
return BallMatrix(B, R)
end
function Base.:*(A::BallMatrix{T}, B::BallMatrix{T}) where {T <: AbstractFloat}
# mA, rA = mid(A), rad(A)
# mB, rB = mid(B), rad(B)
# C = mA * mB
# R = setrounding(T, RoundUp) do
# R = abs.(mA) * rB + rA * (abs.(mB) + rB)
# end
# BallMatrix(C, R)
return MMul3(A, B)
end
function Base.:*(A::BallMatrix{T}, B::Matrix{T}) where {T <: AbstractFloat}
return MMul3(A, B)
end
function Base.:*(A::Matrix{T}, B::BallMatrix{T}) where {T <: AbstractFloat}
return MMul3(A, B)
end
# TODO: Should we implement this?
# From Theveny https://theses.hal.science/tel-01126973/en
function MMul2(A::BallMatrix{T}, B::BallMatrix{T}) where {T <: AbstractFloat}
@warn "Not Implemented"
end
# As in Revol-Theveny
# Parallel Implementation of Interval Matrix Multiplication
# pag. 4
# please check the values of u and η
function MMul3(A::BallMatrix{T}, B::BallMatrix{T}) where {T <: AbstractFloat}
m, k = size(A)
mA, rA = mid(A), rad(A)
mB, rB = mid(B), rad(B)
mC = mA * mB
rC = setrounding(T, RoundUp) do
rprimeB = ((k + 2) * ϵp * abs.(mB) + rB)
rC = abs.(mA) * rprimeB + rA * (abs.(mB) + rB) .+ η / ϵp
end
BallMatrix(mC, rC)
end
function MMul3(A::BallMatrix{T}, B::Matrix{T}) where {T <: AbstractFloat}
m, k = size(A)
mA, rA = mid(A), rad(A)
mC = mA * B
rC = setrounding(T, RoundUp) do
rprimeB = ((k + 2) * ϵp * abs.(B))
rC = abs.(mA) * rprimeB + rA * (abs.(B)) .+ η / ϵp
end
BallMatrix(mC, rC)
end
function MMul3(A::Matrix{T}, B::BallMatrix{T}) where {T <: AbstractFloat}
m, k = size(A)
mB, rB = mid(B), rad(B)
mC = A * mB
rC = setrounding(T, RoundUp) do
rprimeB = ((k + 2) * ϵp * abs.(mB) + rB)
rC = abs.(A) * rprimeB .+ η / ϵp
end
BallMatrix(mC, rC)
end
function MMul3(A::Matrix{T}, B::Matrix{T}) where {T <: AbstractFloat}
m, k = size(A)
mC = A * B
rC = setrounding(T, RoundUp) do
rprimeB = ((k + 2) * ϵp * abs.(B))
rC = abs.(A) * rprimeB .+ η / ϵp
end
BallMatrix(mC, rC)
end
# As in Revol-Theveny
# Parallel Implementation of Interval Matrix Multiplication
# pag. 4
# please check the values of u and η
function MMul5(A::BallMatrix{T}, B::BallMatrix{T}) where {T <: AbstractFloat}
m, k = size(A)
mA, rA = mid(A), rad(A)
mB, rB = mid(B), rad(B)
ρA = sign.(mA) .* min.(abs.(mA), rA)
ρB = sign.(mB) .* min.(abs.(mB), rB)
mC = mA * mB + ρA * ρB
Γ = abs.(mA) * abs.(mB) + abs.(ρA) * abs.(ρB)
rC = setrounding(T, RoundUp) do
γ = (k + 1) * eps.(Γ) .+ 0.5 * η / ϵp
rC = (abs.(mA) + rA) * (abs.(mB) + rB) - Γ + 2γ
end
BallMatrix(mC, rC)
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 3738 | struct BallVector{T <: AbstractFloat, NT <: Union{T, Complex{T}}, BT <: Ball{T, NT},
CV <: AbstractVector{NT}, RV <: AbstractVector{T}} <: AbstractVector{BT}
c::CV
r::RV
function BallVector(c::AbstractVector{T},
r::AbstractVector{T}) where {T <: AbstractFloat}
new{T, T, Ball{T, T}, typeof(c), typeof(r)}(c, r)
end
function BallVector(c::AbstractVector{Complex{T}},
r::AbstractVector{T}) where {T <: AbstractFloat}
new{T, Complex{T}, Ball{T, Complex{T}}, typeof(c), typeof(r)}(c, r)
end
end
BallVector(v::AbstractVector) = BallVector(mid.(v), rad.(v))
mid(v::AbstractVector) = v
rad(v::AbstractVector) = zeros(eltype(v), length(v))
# mid(A::BallMatrix) = map(mid, A)
# rad(A::BallMatrix) = map(rad, A)
mid(A::BallVector) = A.c
rad(A::BallVector) = A.r
# Array interface
Base.eltype(::BallVector{T, NT, BT}) where {T, NT, BT} = BT
Base.IndexStyle(::Type{<:BallVector}) = IndexLinear()
Base.size(v::BallVector, i...) = size(v.c, i...)
Base.length(v::BallVector) = length(v.c)
function Base.getindex(M::BallVector, I::S) where {S <: Union{Int64, CartesianIndex{1}}}
return Ball(getindex(M.c, I), getindex(M.r, I))
end
function Base.getindex(M::BallVector, inds...)
return BallVector(getindex(M.c, inds...), getindex(M.r, inds...))
end
function Base.display(v::BallVector{
T, NT, Ball{T, NT}, Vector{NT},
Vector{T}}) where {T <: AbstractFloat, NT <: Union{T, Complex{T}}}
#@info "test"
m = length(v)
V = [Ball(v.c[i], v.r[i]) for i in 1:m]
display(V)
end
function Base.setindex!(M::BallVector, x, inds...)
setindex!(M.c, mid(x), inds...)
setindex!(M.r, rad(x), inds...)
end
Base.copy(M::BallVector) = BallVector(copy(M.c), copy(M.r))
function Base.zeros(::Type{B}, n::Integer) where {B <: Ball}
BallVector(zeros(midtype(B), n), zeros(radtype(B), n))
end
function Base.ones(::Type{B}, n::Integer) where {B <: Ball}
BallVector(ones(midtype(B), n), zeros(radtype(B), n))
end
# # LinearAlgebra functions
# function LinearAlgebra.adjoint(M::BallMatrix)
# return BallMatrix(mid(M)', rad(M)')
# end
# # Operations
for op in (:+, :-)
@eval function Base.$op(A::BallVector{T}, B::BallVector{T}) where {T <: AbstractFloat}
mA, rA = mid(A), rad(A)
mB, rB = mid(B), rad(B)
C = $op(mA, mB)
R = setrounding(T, RoundUp) do
R = (ϵp * abs.(C) + rA) + rB
end
BallVector(C, R)
end
end
function Base.:*(lam::Number, A::BallVector{T}) where {T}
B = LinearAlgebra.copymutable_oftype(A.c,
Base._return_type(+,
Tuple{eltype(A.c), typeof(lam)}))
B = lam * A.c
R = setrounding(T, RoundUp) do
return (η .+ ϵp * abs.(B)) + (A.r * abs(mid(lam)))
end
return BallVector(B, R)
end
function Base.:*(lam::Ball{T, NT}, A::BallVector{T}) where {T, NT <: Union{T, Complex{T}}}
B = LinearAlgebra.copymutable_oftype(A.c,
Base._return_type(+,
Tuple{eltype(A.c),
typeof(mid(lam))}))
B = mid(lam) * A.c
R = setrounding(T, RoundUp) do
return (η .+ ϵp * abs.(B)) + ((abs.(A.c) + A.r) * rad(lam) + A.r * abs(mid(lam)))
end
return BallVector(B, R)
end
function Base.:*(A::BallMatrix, v::Vector)
n = length(v)
bV = reshape(mid(v), (n, 1))
w = A * bV
wc = vec(mid(w))
wr = vec(rad(w))
return BallVector(wc, wr)
end
function Base.:*(A::BallMatrix, v::BallVector)
n = length(v)
vc = reshape(mid(v), (n, 1))
vr = reshape(rad(v), (n, 1))
B = BallMatrix(vc, vr)
w = MMul3(A, B)
wc = vec(mid(w))
wr = vec(rad(w))
return BallVector(wc, wr)
end
Base.:*(A::AbstractMatrix, v::BallVector) = BallMatrix(A) * v
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 710 | using BallArithmetic
using Test
@testset "BallArithmetic.jl" begin
include("test_ball/test_ball.jl")
include("test_types/test_constructors.jl")
include("test_matrix_classifier/test_matrix_classifier.jl")
include("test_types/test_algebra.jl")
include("test_types/test_vector.jl")
include("test_types/test_vector_operations.jl")
include("test_eigen/test_eigen.jl")
include("test_interval_arithmetic_ext/test_interval_arithmetic_ext.jl")
include("test_pseudospectra/test_pseudospectra.jl")
include("test_fft/test_fft.jl")
include("test_norm_bounds/test_norm_bounds.jl")
include("test_svd/test_svd.jl")
include("test_numerical_test/test_numerical_test.jl")
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 1064 | @testset "Ball Arithmetic" begin
x = Ball(0.0, 0.5)
@test in(0, x) == true
x = abs(x)
@test x.c == 0.25 && x.r == 0.25
x = 0.0 ± 1.0
@test x.c == 0.0 && x.r == 1.0
x = Ball(1.0 + im, 2.0)
@test in(0, x) == true
import IntervalArithmetic
v = rand(Float64, 1024)
iv = IntervalArithmetic.interval.(v)
bv = Ball.(v)
w = rand(1024)
iw = IntervalArithmetic.interval.(w)
bw = Ball.(w)
isum = iv + iw
lower = [IntervalArithmetic.inf(x) for x in isum]
higher = [IntervalArithmetic.sup(x) for x in isum]
bsum = bv + bw
@test all(in.(lower, bsum)) && all(in.(higher, bsum))
iprod = iv .* iw
lower = [IntervalArithmetic.inf(x) for x in iprod]
higher = [IntervalArithmetic.sup(x) for x in iprod]
bprod = bv .* bw
@test all(in.(lower, bprod)) && all(in.(higher, bprod))
x = Ball(1.0, 1 / 4) #interval [3/4, 5/4]
t = inv(x) # the inverse is [4/5, 4/3]
@test 4 / 5 ∈ t
@test 4 / 3 ∈ t
x = Ball(rand() + im * rand())
@test 1.0 ∈ x * inv(x)
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 365 | @testset "Eigvals" begin
using BallArithmetic
A = BallMatrix(rand(256, 256))
v = BallArithmetic.gevbox(A, A)
@test all([abs(v[i].c - 1.0) < v[i].r for i in 1:256])
bA = BallMatrix([125.0 0.0; 0.0 256.0])
@test BallArithmetic.collatz_upper_bound(bA) >= 256.0
v = BallArithmetic.evbox(bA)
@test abs(v[1].c - 125.0) < v[1].r
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 241 | dims = 1024
v = ones(dims)
v_rad = ones(dims) / 2^20
v = BallVector(v, v_rad)
fft_v = BallArithmetic.fft(v)
@test 1024 in fft_v[1]
A = BallMatrix(zeros(1024, 2))
A[:, 1] = v
fftA = BallArithmetic.fft(A, (1,))
@test 1024 in fftA[1, 1]
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 534 | import IntervalArithmetic
x = IntervalArithmetic.interval(-1.0, 1.0)
b = Ball(x)
@test b.c == 0.0 && b.r == 1.0
y = x + im * (x + 1.0)
b = Ball(y)
@test b.c == im && b.r >= sqrt(2)
A = fill(x, (2, 2))
B = BallMatrix(A)
@test all([x == 0.0 for x in B.c]) && all(x == 1.0 for x in B.r)
A = A + im * (A .+ 1.0)
B = BallMatrix(A)
@test all([x == im for x in B.c]) && all(x >= sqrt(2) for x in B.r)
b = Ball(0.0, 1.0)
x = IntervalArithmetic.interval(b)
@test IntervalArithmetic.inf(x) == -1.0 && IntervalArithmetic.sup(x) == 1.0
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 477 | @testset "Matrix classifier" begin
using BallArithmetic
bA = BallMatrix([1.0 0.0; 0.0 1.0])
@test BallArithmetic.is_M_matrix(bA) == true
bA = BallMatrix([1.0 0.1; 0.1 1.0])
bB = BallArithmetic.off_diagonal_abs(bA)
@test bB.c == [0.0 0.1; 0.1 0.0]
v = BallArithmetic.diagonal_abs_lower_bound(bA)
@test v == [1.0; 1.0]
ρ = BallArithmetic.collatz_upper_bound(bB)
@test ρ >= 0.1
@test BallArithmetic.is_M_matrix(bA) == true
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 926 | @testset "Norm bounds" begin
A = zeros(1024, 1024) + I
bA = BallMatrix(A)
@test BallArithmetic.upper_bound_L1_opnorm(bA) >= 1.0
@test BallArithmetic.upper_bound_L_inf_opnorm(bA) >= 1.0
@test BallArithmetic.upper_bound_L2_opnorm(bA) >= 1.0
@test BallArithmetic.svd_bound_L2_opnorm(bA) >= 1.0
@test BallArithmetic.svd_bound_L2_opnorm_inverse(bA) >= 1.0
@test BallArithmetic.svd_bound_L2_resolvent(bA, Ball(0.5)) >= 2.0
bA = bA + Ball(0.0, 1 / 16) * I
@test BallArithmetic.upper_bound_L1_opnorm(bA) >= 1.0 + 1 / 16
@test BallArithmetic.upper_bound_L_inf_opnorm(bA) >= 1.0 + 1 / 16
@test BallArithmetic.upper_bound_L2_opnorm(bA) >= 1.0 + 1 / 16
@test BallArithmetic.svd_bound_L2_opnorm(bA) >= 1.0 + 1 / 16
@test BallArithmetic.svd_bound_L2_opnorm_inverse(bA) >= 1 / (1.0 - 1 / 16)
@test BallArithmetic.svd_bound_L2_resolvent(bA, Ball(0.5)) >= 1 / (0.5 - 1 / 16)
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 396 | @testset "Test Numerical Test" begin
A = BallArithmetic.NumericalTest._test_matrix(4)
@test A == [1.0 0 0 2^(-53);
0 1.0 0 2^(-53);
0 0 1.0 2^(-53);
0 0 0 2^(-53)]
# we test the singlethread version, so that CI points
# out if setting rounding modes is broken
@test BallArithmetic.NumericalTest.rounding_test(1, 2) == true
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 1424 | @testset "pseudospectra" begin
A = [1.0 0.0; 0.0 -1.0]
bA = BallMatrix(A)
using LinearAlgebra
K = svd(A)
@test BallArithmetic._follow_level_set(0.5 + im * 0, 0.01, K) == (0.5 - 0.01im, 1.0)
enc = BallArithmetic.compute_enclosure(bA, 0.0, 2.0, 0.01)
@test enc[1].λ == 1.0 + 0.0 * im
@test BallArithmetic.bound_resolvent(enc[1]) >= 100
@test all(abs.(enc[1].points .- 1.0) .<= 0.02)
A = [1.0 0.0; 0.0 -1.0]
bA = BallMatrix(A)
enc = BallArithmetic.compute_enclosure(bA, 2.0, 3.0, 0.01)
@test enc[1].λ == 0.0
@test BallArithmetic.bound_resolvent(enc[1]) >= 1
@test all(abs.(enc[1].points) .- 2.0 .<= 0.02)
enc = BallArithmetic.compute_enclosure(bA, 0.0, 0.1, 0.01)
@test enc[1].λ == 0.0
@test BallArithmetic.bound_resolvent(enc[1]) >= 1.0
@test all(abs.((enc[1].points)) .- 0.1 .<= 0.02)
E = BallArithmetic._compute_exclusion_circle_level_set_priori(A,
1.0,
0.01;
rel_pearl_size = 1 / 64,
max_initial_newton = 16)
@test all([abs(E.points[i + 1] - E.points[i]) for i in 1:(length(E.points) - 1)] .<
2 * E.radiuses[1])
@test BallArithmetic.bound_resolvent(E) > 100
E = BallArithmetic._compute_exclusion_circle_level_set_ode(A,
1.0,
0.01; max_initial_newton = 16,
max_steps = 1000,
rel_steps = 16)
@test BallArithmetic.bound_resolvent(E) > 100
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 854 | @testset "verify singular value perron frobenius " begin
A = [0.5 0.5; 0.3 0.7]
ρ = BallArithmetic.collatz_upper_bound_L2_opnorm(BallMatrix(A))
@test 1 ≤ ρ
end
@testset "verified svd" begin
mA = [1.0 0 0 0 2; 0 0 3 0 0; 0 0 0 0 0; 0 2 0 0 0; 0 0 0 0 0]
rA = zeros(size(mA))
rA[1, 1] = 0.1
A = BallMatrix(mA, rA)
Σ = BallArithmetic.svdbox(A)
@test abs(3 - Σ[1].c) < Σ[1].r
@test abs(sqrt(5) - (Σ[2].c)) < Σ[2].r
@test abs(2 - Σ[3].c) < Σ[3].r
@test abs(Σ[4].c) < Σ[4].r
@test abs(Σ[5].c) < Σ[5].r
A = im * A
# Σ = IntervalLinearAlgebra.svdbox(A, IntervalLinearAlgebra.R1())
# @test all([abs(3-Σ[1].c)< Σ[1].r;
# sqrt(abs(5-(Σ[2].c)^2)< Σ[2].r);
# abs(2-Σ[3].c)< Σ[3].r;
# abs(Σ[4].c)< Σ[4].r;
# abs(Σ[5].c)< Σ[5].r])
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 2330 | @testset "Test Matrix Algebra" begin
import IntervalArithmetic
A = rand(4, 4)
err = rand(4, 4)
ierr = IntervalArithmetic.interval(-2^-16, 2^-16) * err
rA = 2^16 * err
iA = IntervalArithmetic.interval.(A) .+ ierr
bA = BallMatrix(A, rA)
Iλ = 1 + 2^(-10) * IntervalArithmetic.interval(-1, 1)
bλ = Ball(1, 2^(-10))
IB = Iλ * iA
bB = bλ * bA
lower = [IntervalArithmetic.inf(x) for x in IB]
higher = [IntervalArithmetic.sup(x) for x in IB]
@test all(in.(lower, bB)) && all(in.(higher, bB))
B = rand(4, 4)
iB = IntervalArithmetic.interval.(B)
bB = BallMatrix(B)
isum = iA + iB
lower = [IntervalArithmetic.inf(x) for x in isum]
higher = [IntervalArithmetic.sup(x) for x in isum]
bsum = bA + bB
@test all(in.(lower, bsum))
@test all(in.(higher, bsum))
B = rand(4, 4)
iB = IntervalArithmetic.interval.(B)
isum = iA + iB
lower = [IntervalArithmetic.inf(x) for x in isum]
higher = [IntervalArithmetic.sup(x) for x in isum]
bsum = bA + B
@test all(in.(lower, bsum))
@test all(in.(higher, bsum))
bB = BallMatrix(B)
iprod = iA * iB
bprod = bA * bB
lower = [IntervalArithmetic.inf(x) for x in iprod]
higher = [IntervalArithmetic.sup(x) for x in iprod]
@test all(in.(lower, bprod))
@test all(in.(higher, bprod))
iprod = A * iB
bprod = A * bB
lower = [IntervalArithmetic.inf(x) for x in iprod]
higher = [IntervalArithmetic.sup(x) for x in iprod]
@test all(in.(lower, bprod))
@test all(in.(higher, bprod))
iprod = iB * A
bprod = bB * A
lower = [IntervalArithmetic.inf(x) for x in iprod]
higher = [IntervalArithmetic.sup(x) for x in iprod]
@test all(in.(lower, bprod))
@test all(in.(higher, bprod))
using LinearAlgebra
A = zeros(Ball{Float64, Float64}, (16, 16))
lam = Ball(1 / 8, 1 / 8)
B = A - lam * I
#TODO diag does not seem to work on BallMatrices
@test all(-lam.c .== diag(B.c)) && all(lam.r .<= diag(B.r))
lam = Ball(im * 1 / 8, 1 / 8)
B = A - lam * I
@test all(-lam.c .== diag(B.c)) && all(lam.r .<= diag(B.r))
A = rand(4, 4)
B = rand(4, 4)
bC = BallArithmetic.MMul3(A, B)
bC2 = BallMatrix(A) * BallMatrix(B)
@test bC.c == bC2.c && bC.r == bC2.r
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 937 |
@testset "Constructors" begin
A = rand(1024, 1024)
@test mid(A) == A
@test rad(A) == zeros(1024, 1024)
bA = BallMatrix(A)
@test bA.c == A
@test bA.r == zeros(1024, 1024)
reduced = bA[2:end, 2:end]
@test reduced.c == A[2:end, 2:end]
bA = BallMatrix(A, A)
@test bA.c == A
@test bA.r == A
@test Base.eltype(bA) == Ball{Float64, Float64}
@test mid(bA[1, 2]) == A[1, 2]
@test rad(bA[1, 2]) == A[1, 2]
bA = BallMatrix(A + im * A, A)
@test bA.c == A + im * A
@test bA.r == A
@test Base.eltype(bA) == Ball{Float64, Complex{Float64}}
A = zeros(BallF64, (16, 8))
@test A.c == zeros(Float64, (16, 8))
@test A.r == zeros(Float64, (16, 8))
B = ones(BallF64, (8, 4))
@test B.c == ones(Float64, (8, 4))
@test B.r == zeros(Float64, (8, 4))
A[1:8, 1:4] = B
@test A.c[1:8, 1:4] == ones((8, 4))
@test A.r[1:8, 1:4] == zeros((8, 4))
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 614 | @testset "BallVectors" begin
v = ones(128)
@test mid(v) == v
@test rad(v) == zeros(128)
bv = ones(BallF64, 128)
@test mid(bv) == v
@test rad(bv) == zeros(128)
bv = zeros(BallF64, 128)
@test mid(bv) == zeros(128)
@test rad(bv) == zeros(128)
vr = 2^(-10) * ones(128)
bv = BallVector(v, vr)
@test mid(bv) == v
@test rad(bv) == vr
@test eltype(bv) == BallF64
@test length(bv) == 128
reduced = bv[1:5]
@test mid(reduced) == v[1:5]
@test rad(reduced) == rad(bv)[1:5]
w = rand(5)
bv[1:5] = BallVector(w)
@test bv.c[1:5] == w
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | code | 3269 | @testset "Test Vector Operations" begin
import IntervalArithmetic
A = rand(4, 4)
err = rand(4, 4)
ierr = IntervalArithmetic.interval(-2^-16, 2^-16) * err
rA = 2^16 * err
iA = IntervalArithmetic.interval.(A) .+ ierr
bA = BallMatrix(A, rA)
v = ones(4)
iProd = iA * v
bProd = bA * v
lower = [IntervalArithmetic.inf(x) for x in iProd]
higher = [IntervalArithmetic.sup(x) for x in iProd]
@test all(in.(lower, bProd)) && all(in.(higher, bProd))
v = rand(4)
err = rand(4)
ierr = IntervalArithmetic.interval(-2^-16, 2^-16) * err
rv = 2^16 * err
iv = IntervalArithmetic.interval.(v) + ierr
bv = BallVector(v, rv)
iProd = iA * iv
bProd = bA * bv
lower = [IntervalArithmetic.inf(x) for x in iProd]
higher = [IntervalArithmetic.sup(x) for x in iProd]
@test all(in.(lower, bProd))
@test all(in.(higher, bProd))
iProd = A * iv
bProd = A * bv
lower = [IntervalArithmetic.inf(x) for x in iProd]
higher = [IntervalArithmetic.sup(x) for x in iProd]
@test all(in.(lower, bProd))
@test all(in.(higher, bProd))
vA = rand(4)
vB = rand(4)
iA = IntervalArithmetic.interval.(vA)
bA = BallVector(vA)
iB = IntervalArithmetic.interval.(vB)
bB = BallVector(vB)
isum = iA + iB
lower = [IntervalArithmetic.inf(x) for x in isum]
higher = [IntervalArithmetic.sup(x) for x in isum]
bsum = bA + bB
@test all(in.(lower, bsum))
@test all(in.(higher, bsum))
λ = 2 + 2^(-10)
IB = λ * iA
bB = λ * bA
lower = [IntervalArithmetic.inf(x) for x in IB]
higher = [IntervalArithmetic.sup(x) for x in IB]
@test all(in.(lower, bB))
@test all(in.(higher, bB))
Iλ = 1 + 2^(-10) * IntervalArithmetic.interval(-1, 1)
bλ = Ball(1, 2^(-10))
IB = Iλ * iA
bB = bλ * bA
lower = [IntervalArithmetic.inf(x) for x in IB]
higher = [IntervalArithmetic.sup(x) for x in IB]
@test all(in.(lower, bB))
@test all(in.(higher, bB))
# B = rand(4, 4)
# iB = IntervalArithmetic.interval.(B)
# isum = iA + iB
# lower = [x.lo for x in isum]
# higher = [x.hi for x in isum]
# bsum = bA + B
# @test all(in.(lower, bsum))
# @test all(in.(higher, bsum))
# bB = BallMatrix(B)
# iprod = iA * iB
# bprod = bA * bB
# lower = [x.lo for x in iprod]
# higher = [x.hi for x in iprod]
# @test all(in.(lower, bprod))
# @test all(in.(higher, bprod))
# iprod = A * iB
# bprod = A * bB
# lower = [x.lo for x in iprod]
# higher = [x.hi for x in iprod]
# @test all(in.(lower, bprod))
# @test all(in.(higher, bprod))
# iprod = iB * A
# bprod = bB * A
# lower = [x.lo for x in iprod]
# higher = [x.hi for x in iprod]
# @test all(in.(lower, bprod))
# @test all(in.(higher, bprod))
# using LinearAlgebra
# A = zeros(Ball{Float64, Float64}, (16, 16))
# lam = Ball(1 / 8, 1 / 8)
# B = A - lam * I
# #TODO diag does not seem to work on BallMatrices
# @test all(-lam.c .== diag(B.c))
# @test all(lam.r .<= diag(B.r))
# lam = Ball(im * 1 / 8, 1 / 8)
# B = A - lam * I
# @test all(-lam.c .== diag(B.c))
# @test all(lam.r .<= diag(B.r))
end
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | docs | 655 | # BallArithmetic
[](https://JuliaBallArithmetic.github.io/BallArithmetic.jl/stable/)
[](https://JuliaBallArithmetic.github.io/BallArithmetic.jl/dev/)
[](https://github.com/JuliaBallArithmetic/BallArithmetic.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/JuliaBallArithmetic/BallArithmetic.jl)
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | docs | 338 | We are interested in algorithms to compute rigorous enclosures
of eigenvalues.
We implement Ref. [Miyajima2012](@cite); the idea is to approach the problem
in two steps, the interested reader may refer to the treatment in [Eigenvalues in Arb](https://fredrikj.net/blog/2018/12/eigenvalues-in-arb/) for a deeper discussion on the topic.
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | docs | 761 | ```@meta
CurrentModule = BallArithmetic
```
# BallArithmetic
Documentation for [BallArithmetic](https://github.com/JuliaBallArithmetic/BallArithmetic.jl).
In this package we use the techniques first introduced in Ref. [Rump1999](@cite), following the more recent work Ref. [RevolTheveny2013](@cite),
to implement a rigorous matrix product in mid-radius (ball) arithmetic.
This allows us to implement numerous algorithms developed by Rump, Miyajima,
Ogita and collaborators to obtain a posteriori guaranteed bounds.
The main object is the `BallMatrix`, i.e., a pair containing a center matrix and a radius matrix.
```@repl
using BallArithmetic
A = ones((2, 2))
bA = BallMatrix(A, A/128)
bA^2
```
```@autodocs
Modules = [BallArithmetic, NumericalTest]
```
| BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |
|
[
"MIT"
] | 0.1.0 | f8a798d6511e7a7b3fd1dbccb85baf13ecacb6f9 | docs | 58 | # References
```@bibliography
```
```@bibliography
*
``` | BallArithmetic | https://github.com/JuliaBallArithmetic/BallArithmetic.jl.git |