licenses | version | tree_hash | path | type | size | text | package_name | repo |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.3.0 | 51bb25de518b4c62b7cdf26e5fbb84601bb27a60 | docs | 2888 | # CubedSphere.jl Documentation
## Conformal cubed sphere mapping
The conformal method for projecting the cube on the sphere was first described by [Rancic-etal-1996](@citet).
> Rančić et al., (1996). A global shallow-water model using an expanded spherical cube - Gnomonic versus conformal coordinates, _Quarterly Journal of the Royal Meteorological Society_, **122 (532)**, 959-982. doi:[10.1002/qj.49712253209](https://doi.org/10.1002/qj.49712253209)
Imagine a cube inscribed into a sphere. Using [`conformal_cubed_sphere_mapping`](@ref) we can map the face of the
cube onto the sphere. [`conformal_cubed_sphere_mapping`](@ref) maps the face that corresponds to the sphere's
sector that includes the North Pole, that is, the face of the cube is oriented normal to the ``z`` axis. This cube's
face is parametrized with orthogonal coordinates ``(x, y) \in [-1, 1] \times [-1, 1]`` with ``(x, y) = (0, 0)`` being
in the center of the cube's face, that is on the ``z`` axis.
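As a quick sanity check (an illustrative snippet; it assumes the mapping targets the unit sphere, as the plots below suggest), the center of the face should land on the North Pole:
```julia
using CubedSphere

x, y, z = conformal_cubed_sphere_mapping(0, 0)
# expect (x, y, z) ≈ (0, 0, 1), i.e. the North Pole of the unit sphere
```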
We can visualize the mapping.
```@setup 1
using Rotations
using GLMakie
using CubedSphere
```
```@example 1
using GLMakie
using CubedSphere
N = 16
x = range(-1, 1, length=N)
y = range(-1, 1, length=N)
X = zeros(length(x), length(y))
Y = zeros(length(x), length(y))
Z = zeros(length(x), length(y))
for (j, y′) in enumerate(y), (i, x′) in enumerate(x)
X[i, j], Y[i, j], Z[i, j] = conformal_cubed_sphere_mapping(x′, y′)
end
fig = Figure(size = (800, 400))
ax2D = Axis(fig[1, 1],
aspect = 1,
title = "Cubed Sphere Panel")
ax3D = Axis3(fig[1, 2],
aspect = (1, 1, 1), limits = ((-1, 1), (-1, 1), (-1, 1)),
title = "Cubed Sphere Panel")
for ax in [ax2D, ax3D]
wireframe!(ax, X, Y, Z)
end
colsize!(fig.layout, 1, Auto(0.8))
colgap!(fig.layout, 40)
current_figure()
```
Above, we plotted the mapping from the cube's face onto the sphere both in a 2D projection (i.e., looking down on the sphere from above its North Pole) and in 3D space.
We can then use [Rotations.jl](https://github.com/JuliaGeometry/Rotations.jl) to rotate the face of the
sphere that includes the North Pole and obtain all six faces of the sphere.
```@example 1
using Rotations
fig = Figure(size = (800, 400))
ax2D = Axis(fig[1, 1],
aspect = 1,
title = "Cubed Sphere")
ax3D = Axis3(fig[1, 2],
aspect = (1, 1, 1), limits = ((-1, 1), (-1, 1), (-1, 1)),
title = "Cubed Sphere")
identity = one(RotMatrix{3})
rotations = (RotY(π/2), RotX(-π/2), identity, RotY(-π/2), RotX(π/2), RotX(π))
for rotation in rotations
X′ = similar(X)
Y′ = similar(Y)
Z′ = similar(Z)
for I in CartesianIndices(X)
X′[I], Y′[I], Z′[I] = rotation * [X[I], Y[I], Z[I]]
end
wireframe!(ax2D, X′, Y′, Z′)
wireframe!(ax3D, X′, Y′, Z′)
end
colsize!(fig.layout, 1, Auto(0.8))
colgap!(fig.layout, 40)
current_figure()
```
| CubedSphere | https://github.com/CliMA/CubedSphere.jl.git |
|
[
"MIT"
] | 0.3.0 | 51bb25de518b4c62b7cdf26e5fbb84601bb27a60 | docs | 198 | # CubedSphere.jl Documentation
## Overview
CubedSphere.jl provides tools for generating cubed sphere grids. The package is developed by
the [Climate Modeling Alliance](https://clima.caltech.edu).
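For a minimal, illustrative sketch of the core functionality (see the conformal cubed sphere mapping page for a complete, plotted example):
```julia
using CubedSphere

# Map a point of the cube's North-Pole face, parametrized by
# (x, y) ∈ [-1, 1] × [-1, 1], onto the sphere.
x, y, z = conformal_cubed_sphere_mapping(0.5, -0.25)
```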
| CubedSphere | https://github.com/CliMA/CubedSphere.jl.git |
|
[
"MIT"
] | 0.3.0 | 51bb25de518b4c62b7cdf26e5fbb84601bb27a60 | docs | 35 | # References
```@bibliography
```
| CubedSphere | https://github.com/CliMA/CubedSphere.jl.git |
|
[
"MIT"
] | 0.3.0 | 51bb25de518b4c62b7cdf26e5fbb84601bb27a60 | docs | 102 | ### [Index](@id main-index)
```@index
Pages = ["public.md", "internals.md", "function_index.md"]
```
| CubedSphere | https://github.com/CliMA/CubedSphere.jl.git |
|
[
"MIT"
] | 0.3.0 | 51bb25de518b4c62b7cdf26e5fbb84601bb27a60 | docs | 161 | # Private types and functions
Documentation for `CubedSphere.jl`'s internal interface.
## CubedSphere
```@autodocs
Modules = [CubedSphere]
Public = false
```
| CubedSphere | https://github.com/CliMA/CubedSphere.jl.git |
|
[
"MIT"
] | 0.3.0 | 51bb25de518b4c62b7cdf26e5fbb84601bb27a60 | docs | 96 | ## Library Outline
```@contents
Pages = ["public.md", "internals.md", "function_index.md"]
```
| CubedSphere | https://github.com/CliMA/CubedSphere.jl.git |
|
[
"MIT"
] | 0.3.0 | 51bb25de518b4c62b7cdf26e5fbb84601bb27a60 | docs | 245 | # Public Documentation
Documentation for `CubedSphere.jl`'s public interface.
See the Internals section of the manual for internal package docs covering all submodules.
## CubedSphere
```@autodocs
Modules = [CubedSphere]
Private = false
```
| CubedSphere | https://github.com/CliMA/CubedSphere.jl.git |
|
[
"MIT"
] | 3.0.2 | f7736d86d7e8e80be3ac9685380e83e17dba3b81 | code | 3250 | module LaTeXDatax
using Unitful, UnitfulLatexify, Latexify
export @datax
"""
```julia
@datax
```
Print the arguments to a data file to be read by pgfkeys. Best used with the
[`datax` LaTeX package](https://ctan.org/pkg/datax). Variables can be supplied
either by name or as assignments, and meta-arguments are supplied using the
`:=` operator.
The meta-arguments include all the keyword arguments from `Latexify.jl` and
`UnitfulLatexify.jl` (for instance `unitformat := :siunitx`, `fmt := "%.2e"`),
as well as a few extra:
# Meta-arguments
* `filename`: The path to a file that will be written.
* `io`: an `IO` object to write to instead of a file. Overrides the `filename`
argument. If neither this nor `filename` is given, `stdout` is used.
* `permissions`: Defaults to `"w"`. Can be given as `"a"` to append to a file
instead of overwriting. Only meaningful with a `filename` argument.
# Examples
```julia
julia> a = 2;
julia> b = 3.2u"m";
julia> @datax a b c=3*a d=27 unitformat:=:siunitx
\\pgfkeyssetvalue{/datax/a}{\\num{2}}
\\pgfkeyssetvalue{/datax/b}{\\qty{3.2}{\\meter}}
\\pgfkeyssetvalue{/datax/c}{\\num{6}}
\\pgfkeyssetvalue{/datax/d}{\\num{27}}
```
"""
macro datax(args...)
return esc(datax_helper(args...))
end
function datax_helper(args...)
metaargs = Expr[]
names = Symbol[]
values = Any[]
for a in args
if a isa Symbol
push!(names, a)
push!(values, a)
continue
end
if a.head == :(=)
push!(names, a.args[1])
push!(values, a.args[2])
continue
end
if a.head == :(:=)
push!(metaargs, Expr(:kw, a.args[1], a.args[2]))
continue
end
error("I don't know what to do with argument $a")
end
names = Expr(:tuple, QuoteNode.(names)...)
values = Expr(:tuple, values...)
quote
LaTeXDatax.datax($names, $values; $(metaargs...))
nothing
end
end
function datax(names, values; kwargs...)
if haskey(kwargs, :io)
datax(kwargs[:io], names, values; kwargs...)
return nothing
end
if haskey(kwargs, :filename)
datax(kwargs[:filename], names, values; kwargs...)
return nothing
end
datax(stdout, names, values; kwargs...)
return nothing
end
datax(io::IO, names, values; kwargs...) = printkeyval.(Ref(io), names, values; kwargs...)
function datax(filename::String, names, values; permissions="w", kwargs...)
if haskey(kwargs, :overwrite)
if kwargs[:overwrite]
permissions = "w"
else
permissions = "a"
end
end
open(filename, permissions) do io
permissions == "w" &&
println(io, "% Autogenerated by LaTeXDatax.jl, will be overwritten")
datax(io, names, values; kwargs...)
end
end
function printkeyval(io::IO, name, value; kwargs...)
print(io, "\\pgfkeyssetvalue{/datax/", name, "}{")
printdata(io, value; kwargs...)
print(io, "}\n")
return nothing
end
printdata(io::IO, v::String; kwargs...) = print(io, v)
printdata(io::IO, v::Number; kwargs...) = print(io, latexify(v * u"one"; kwargs...))
printdata(io::IO, v; kwargs...) = print(io, latexify(v; kwargs...))
end # module
| LaTeXDatax | https://github.com/Datax-package/LaTeXDatax.jl.git |
|
[
"MIT"
] | 3.0.2 | f7736d86d7e8e80be3ac9685380e83e17dba3b81 | code | 1760 | using Unitful, LaTeXDatax, JuliaFormatter
using Test
# Supply "overwrite" as a commandline argument to overwrite in formatting step
overwrite = get(ARGS, 1, "") == "overwrite"
cd(@__DIR__)
@testset "LaTeXDatax.jl" begin
io = IOBuffer()
# Basic data printing
LaTeXDatax.printdata(io, "String")
@test String(take!(io)) == "String"
LaTeXDatax.printdata(io, 1.25)
@test String(take!(io)) == "\$1.25\$"
LaTeXDatax.printdata(io, 3.141592; fmt="%.2f", unitformat=:siunitx)
@test String(take!(io)) == "\\num{3.14}"
# keyval printing
LaTeXDatax.printkeyval(io, :a, 612.2u"nm")
@test String(take!(io)) == "\\pgfkeyssetvalue{/datax/a}{\$612.2\\;\\mathrm{nm}\$}\n"
# complete macro
a = 2
b = 3.2u"m"
@datax a b c = 3 * a d = 27 unitformat := :siunitx io := io
@test String(take!(io)) == """
\\pgfkeyssetvalue{/datax/a}{\\num{2}}
\\pgfkeyssetvalue{/datax/b}{\\qty{3.2}{\\meter}}
\\pgfkeyssetvalue{/datax/c}{\\num{6}}
\\pgfkeyssetvalue{/datax/d}{\\num{27}}
"""
# Write to file
rm.(("data.tex", "test.pdf", "test.log"); force=true)
@datax a b c = 3 * a d = 27 unitformat := :siunitx filename := "data.tex"
@test isfile("data.tex")
@test_nowarn run(`pdflatex --file-line-error --interaction=nonstopmode test.tex`)
rm("test.aux"; force=true)
end
@testset "Formatting" begin
is_formatted = JuliaFormatter.format(LaTeXDatax; overwrite)
@test is_formatted
if ~is_formatted
if overwrite
println("The package has now been formatted. Review the changes and commit.")
else
println(
"The package failed formatting check. Try `JuliaFormatter.format(LaTeXDatax)`",
)
end
end
end
| LaTeXDatax | https://github.com/Datax-package/LaTeXDatax.jl.git |
|
[
"MIT"
] | 3.0.2 | f7736d86d7e8e80be3ac9685380e83e17dba3b81 | docs | 666 | # LaTeXDatax
Save specified variables to a data file to be read into a LaTeX document using
the accompanying package `datax.sty`.
## Installation
`using Pkg; Pkg.add("LaTeXDatax")`, and install [the `datax` package (CTAN)](https://ctan.org/tex-archive/macros/latex/contrib/datax).
## Usage
```julia
using LaTeXDatax, Unitful
a = 25;
@datax a b=3a c=3e8u"m/s" d="Raw string" filename:="data.tex"
```
```latex
\documentclass{article}
\usepackage{siunitx}
\usepackage[dataxfile=data.tex]{datax}
\begin{document}
The speed of light is \datax{c}.
\end{document}
```
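For illustration, a run that appends to an existing `data.tex` instead of overwriting it might look as follows; this is a sketch using the `permissions` meta-argument described in the `@datax` docstring, and the variable name and value are just placeholders:
```julia
using LaTeXDatax, Unitful

c = 3e8u"m/s"
# permissions := "a" appends to data.tex rather than overwriting it
@datax c filename:="data.tex" permissions:="a"
```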
More detailed usage information is in the docstrings of the code; run `?@datax` in the REPL to read them.
| LaTeXDatax | https://github.com/Datax-package/LaTeXDatax.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 492 | using SententialDecisionDiagrams
const SDD = SententialDecisionDiagrams
var_count = 4
var_order = [2,1,4,3]
vtree_type = "balanced"
vtree = SDD.vtree(var_count, var_order, vtree_type)
manager = SDD.sdd_manager(vtree)
println("constructing SDD ... ")
a, b, c, d = [SDD.literal(i,manager) for i in 1:4]
α = (a & b) | (b & c) | (c & d)
println("done")
println("saving sdd and vtree ... ")
SDD.dot("$(@__DIR__)/output/sdd.dot",α)
SDD.dot("$(@__DIR__)/output/vtree.dot",vtree)
println("done")
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 1236 | using SententialDecisionDiagrams
const SDD = SententialDecisionDiagrams
# set up vtree and manager
var_count = 4
vtree_type = "right"
vtree = SDD.vtree(var_count, vtree_type)
manager = SDD.sdd_manager(vtree)
x = [SDD.literal(i,manager) for i in 1:4]
# construct the term X_1 ^ X_2 ^ X_3 ^ X_4
α = x[1] & x[2] & x[3] & x[4]
# construct the term ~X_1 ^ X_2 ^ X_3 ^ X_4
β = ~x[1] & x[2] & x[3] & x[4]
# construct the term ~X_1 ^ ~X_2 ^ X_3 ^ X_4
γ = ~x[1] & ~x[2] & x[3] & x[4]
println("before referencing:")
println("live sdd size = $(SDD.live_size(manager))")
println("dead sdd size = $(SDD.dead_size(manager))")
# ref SDDs so that they are not garbage collected
SDD.ref(α)
SDD.ref(β)
SDD.ref(γ)
println("after referencing:")
println("live sdd size = $(SDD.live_size(manager))")
println("dead sdd size = $(SDD.dead_size(manager))")
# garbage collect
SDD.garbage_collect(manager)
println("after garbage collection:");
println("live sdd size = $(SDD.live_size(manager))")
println("dead sdd size = $(SDD.dead_size(manager))")
SDD.deref(α)
SDD.deref(β)
SDD.deref(γ)
println("saving vtree & shared sdd ...")
SDD.dot("$(@__DIR__)/output/shared-vtree.dot", vtree)
SDD.shared_save_as_dot("$(@__DIR__)/output/shared.dot", manager)
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 896 | using SententialDecisionDiagrams
const SDD = SententialDecisionDiagrams
# set up vtree and manager
vtree = SDD.read_vtree("$(@__DIR__)/input/opt-swap.vtree")
manager = SDD.sdd_manager(vtree)
println("reading sdd from file ...")
α = SDD.read_sdd("$(@__DIR__)/input/opt-swap.sdd", manager)
println("sdd size = $(SDD.size(α))")
# ref, perform the minimization, and then de-ref
SDD.ref(α)
println("minimizing sdd size ... ")
SDD.minimize(manager) # see also SDD.minimize(m,limited=true)
println("done!")
println("sdd size = $(SDD.size(α))")
SDD.deref(α)
# augment the SDD
println("augmenting sdd ...")
β = α & (SDD.literal(4,manager) | SDD.literal(5,manager))
println("sdd size = $(SDD.size(β))")
# ref, perform the minimization again on new SDD, and then de-ref
SDD.ref(β)
println("minimizing sdd ... ")
SDD.minimize(manager)
println("done!")
println("sdd size = $(SDD.size(β))")
SDD.deref(β)
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 1322 | using SententialDecisionDiagrams
const SDD = SententialDecisionDiagrams
# set up vtree and manager
vtree = SDD.read_vtree("$(@__DIR__)/input/rotate-left.vtree")
manager = SDD.sdd_manager(vtree)
# construct the term X_1 ^ X_2 ^ X_3 ^ X_4
x = [SDD.literal(i,manager) for i in 1:5]
α = x[1] & x[2] & x[3] & x[4]
# to perform a rotate, we need the manager's vtree
manager_vtree = SDD.vtree(manager)
manager_vtree_right = SDD.right(manager_vtree)
println("saving vtree & sdd ...")
SDD.dot("$(@__DIR__)/output/before-rotate-vtree.dot", manager_vtree)
SDD.dot("$(@__DIR__)/output/before-rotate-sdd.dot", α)
# ref alpha (so it is not gc'd)
SDD.ref(α)
# garbage collect (no dead nodes when performing vtree operations)
println("dead sdd nodes = $(SDD.dead_count(manager))")
println("garbage collection ...")
SDD.garbage_collect(manager)
println("dead sdd nodes = $(SDD.dead_count(manager))")
println("left rotating ... ")
succeeded = SDD.rotate_left(manager_vtree_right,manager,0)
if succeeded == 1; println("succeeded!") else println("did not succeed!") end
# deref alpha, since ref's are no longer needed
SDD.deref(α)
# the root changed after rotation, so get the manager's vtree again
# this time using root_location
manager_vtree = SDD.vtree(manager)
println("saving vtree & sdd ...")
SDD.dot("$(@__DIR__)/output/after-rotate-vtree.dot", manager_vtree)
SDD.dot("$(@__DIR__)/output/after-rotate-sdd.dot", α)
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 1895 | using SententialDecisionDiagrams
const SDD = SententialDecisionDiagrams
# set up vtree and manager
vtree = SDD.read_vtree("$(@__DIR__)/input/big-swap.vtree")
manager = SDD.sdd_manager(vtree)
println("reading sdd from file ...")
α = SDD.read_sdd("$(@__DIR__)/input/big-swap.sdd", manager)
println("sdd size = $(SDD.size(α))")
# to perform a swap, we need the manager's vtree
manager_vtree = SDD.vtree(manager)
# ref alpha (no dead nodes when swapping)
SDD.ref(α)
#
# # using size of sdd normalized for manager_vtree as baseline for limit
SDD.init_vtree_size_limit(manager_vtree, manager)
#
limit = 2.0
SDD.set_vtree_operation_size_limit(limit, manager)
println("modifying vtree (swap node 7) (limit growth by $(limit)x) ... ")
succeeded = SDD.swap(manager_vtree, manager, 1)
if succeeded == 1; println("succeeded!") else println("did not succeed!") end
println("sdd size = $(SDD.size(α))")
println("modifying vtree (swap node 7) (no limit) ... ")
succeeded = SDD.swap(manager_vtree, manager, 0)
if succeeded == 1; println("succeeded!") else println("did not succeed!") end
println("sdd size = $(SDD.size(α))")
println("updating baseline of size limit ...")
SDD.update_vtree_size_limit(manager)
left_vtree = SDD.left(manager_vtree)
limit = 1.2
SDD.set_vtree_operation_size_limit(limit, manager)
println("modifying vtree (swap node 5) (limit growth by $(limit)x) ... ")
succeeded = SDD.swap(left_vtree, manager, 1)
if succeeded == 1; println("succeeded!") else println("did not succeed!") end
println("sdd size = $(SDD.size(α))")
limit = 1.3
SDD.set_vtree_operation_size_limit(limit, manager)
println("modifying vtree (swap node 5) (limit growth by $(limit)x) ... ")
succeeded = SDD.swap(left_vtree, manager, 1)
if succeeded == 1; println("succeeded!") else println("did not succeed!") end
println("sdd size = $(SDD.size(α))")
# deref alpha, since ref's are no longer needed
SDD.deref(α)
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 1342 | using SententialDecisionDiagrams
# set up vtree and manager
vtree = SententialDecisionDiagrams.read_vtree("$(@__DIR__)/input/simple.vtree")
sdd = SententialDecisionDiagrams.sdd_manager(vtree)
println("Created an SententialDecisionDiagrams with $(SententialDecisionDiagrams.var_count(sdd)) variables")
root = SententialDecisionDiagrams.read_cnf("$(@__DIR__)/input/simple.cnf", sdd, compiler_options=SententialDecisionDiagrams.CompilerOptions(vtree_search_mode=-1))
# For DNF functions use `read_dnf`
# Model Counting
wmc = SententialDecisionDiagrams.wmc_manager(root, log_mode=true)
w = SententialDecisionDiagrams.propagate(wmc)
println("Model count: $(convert(Int32,exp(w)))")
# Weighted Model Counting
lits = [SententialDecisionDiagrams.literal(i,sdd) for i in 1:SententialDecisionDiagrams.var_count(sdd)]
# Positive literal weight
SententialDecisionDiagrams.set_literal_weight(lits[1], log(0.5), wmc)
# Negative literal weight
SententialDecisionDiagrams.set_literal_weight(~lits[1], log(0.5), wmc)
w = SententialDecisionDiagrams.propagate(wmc)
println("Weighted model count: $(exp(w))")
# Visualize SententialDecisionDiagrams and VTREE
println("saving sdd and vtree ... ")
SententialDecisionDiagrams.dot("$(@__DIR__)/output/simple-vtree.dot", vtree)
SententialDecisionDiagrams.dot("$(@__DIR__)/output/simple-sdd-cnf.dot", root)
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 20189 | #TODO add bangs for functions that modify argument(s)
module SententialDecisionDiagrams
using Parameters
include("sddapi.jl")
include("fnf.jl")
using .SddLibrary
struct VTree
vtree::Ptr{SddLibrary.VTree_c}
end
struct SddManager
manager::Ptr{SddLibrary.SddManager_c}
end
struct SddNode
node::Ptr{SddLibrary.SddNode_c}
manager::Ptr{SddLibrary.SddManager_c}
end
struct PrimeSub
prime::SddNode
sub::SddNode
end
struct WmcManager
manager::Ptr{SddLibrary.WmcManager_c}
end
function str_to_char(vtree_type::String)::Ptr{UInt8}
return pointer(vtree_type)
end
# SDD MANAGER FUNCTIONS
function sdd_manager(vtree::VTree)::SddManager
manager = SddLibrary.sdd_manager_new(vtree.vtree)
return SddManager(manager)
end
function sdd_manager(var_count::Integer, auto_gc_and_minimize::Bool)::SddManager
manager = SddLibrary.sdd_manager_create(convert(SddLibrary.SddLiteral, var_count), convert(Cint, auto_gc_and_minimize))
return SddManager(manager)
end
# TODO function manager_new
function free(manager::SddManager)
SddLibrary.sdd_manager_free(manager.manager)
end
function Base.print(manager::SddManager)
SddLibrary.sdd_manager_print(manager.manager)
end
function auto_gc_and_minimize_on(manager::SddManager)
SddLibrary.sdd_manager_auto_gc_and_minimize_on(manager.manager)
end
function sdd_manager_auto_gc_and_minimize_off(manager::SddManager)
SddLibrary.sdd_manager_auto_gc_and_minimize_off(manager.manager)
end
function is_auto_gc_and_minimize_on(manager::SddManager)::Bool
return convert(Bool, SddLibrary.sdd_manager_is_auto_gc_and_minimize_on(manager.manager))
end
# TODO void sdd_manager_set_minimize_function
function unset_minimize_function(manager::SddManager)
SddLibrary.sdd_manager_unset_minimize_function(manager.manager)
end
function options(manager::SddManager)
SddLibrary.sdd_manager_options(manager.manager)
end
# TODO void sdd_manager_set_options(void* options, SddManager* manager);
function is_var_used(var::Integer, manager::SddManager)::Bool
return convert(Bool, SddLibrary.sdd_manager_is_var_used(convert(SddLibrary.SddLiteral, var),manager.manager))
end
function vtree_of_var(var::Integer, manager::SddManager)::VTree
vtree = SddLibrary.sdd_manager_vtree_of_var(convert(SddLibrary.SddLiteral, var), manager.manager)
return VTree(vtree)
end
# TODO Vtree* sdd_manager_lca_of_literals(int count, SddLiteral* literals, SddManager* manager);
function var_count(manager::SddManager)::SddLibrary.SddLiteral
return SddLibrary.sdd_manager_var_count(manager.manager)
end
# TODO void sdd_manager_var_order(SddLiteral* var_order, SddManager *manager);
function add_var_before_first(manager::SddManager)
SddLibrary.sdd_manager_add_var_before_first(manager.manager)
end
function add_var_after_last(manager::SddManager)
SddLibrary.sdd_manager_add_var_after_last(manager.manager)
end
function add_var_before(var::Integer, manager::SddManager)
SddLibrary.sdd_manager_add_var_before(convert(SddLibrary.SddLiteral, var), manager.manager)
end
function add_var_after(var::Integer, manager::SddManager)
SddLibrary.sdd_manager_add_var_after(convert(SddLibrary.SddLiteral, var), manager.manager)
end
# TERMINAL SDDS
function literal(tf::Bool, manager::SddManager)::SddNode
if tf
node = SddLibrary.sdd_manager_true(manager.manager)
else
node = SddLibrary.sdd_manager_false(manager.manager)
end
return SddNode(node, manager.manager)
end
function literal(literal::Integer, manager::SddManager)::SddNode
node = SddLibrary.sdd_manager_literal(convert(SddLibrary.SddLiteral, literal), manager.manager)
return SddNode(node, manager.manager)
end
# SDD QUERIES AND TRANSFORMATIONS
function apply(node1::SddNode, node2::SddNode, op::Integer, manager::SddManager)::SddNode
node = SddLibrary.sdd_apply(node1.node, node2.node, convert(SddLibrary.BoolOp, op), manager.manager)
return SddNode(node, node1.manager)
end
function conjoin(node1::SddNode, node2::SddNode, manager::SddManager)::SddNode
node = SddLibrary.sdd_conjoin(node1.node, node2.node, manager.manager)
return SddNode(node, manager.manager)
end
function disjoin(node1::SddNode, node2::SddNode, manager::SddManager)::SddNode
node = SddLibrary.sdd_disjoin(node1.node, node2.node, manager.manager)
return SddNode(node, manager.manager)
end
function negate(node::SddNode, manager::SddManager)::SddNode
node = SddLibrary.sdd_negate(node.node, node.manager)
return SddNode(node, manager.manager)
end
function condition(lit::Integer, node::SddNode, manager::SddManager)::SddNode
node = SddLibrary.sdd_condition(convert(SddLibrary.SddLiteral,lit), node.node, manager.manager)
return SddNode(node, manager.manager)
end
function exists(lit::Integer, node::SddNode, manager::SddManager)::SddNode
node = SddLibrary.sdd_exists(convert(SddLibrary.SddLiteral,lit), node.node, manager.manager)
return SddNode(node, manager.manager)
end
function exists_multiple(exists_map::Array{<:Integer,1}, node::SddNode, manager::SddManager; static::Bool=false)::SddNode
if !static
node = SddLibrary.sdd_exists_multiple(convert(Array{Cint,1},exists_map), node.node, manager.manager)
else
node = SddLibrary.sdd_exists_multiple_static(convert(Array{Cint,1},exists_map), node.node, manager.manager)
end
return SddNode(node, manager.manager)
end
function for_all(lit::Integer, node::SddNode, manager::SddManager)::SddNode
node = SddLibrary.sdd_forall(convert(SddLibrary.SddLiteral,lit), node.node, manager.manager)
return SddNode(node, manager.manager)
end
function minimize_cardinality(node::SddNode, manager::SddManager; globally::Bool=false)::SddNode
if !globally
node = SddLibrary.sdd_minimize_cardinality(node.node, manager.manager)
else
node = SddLibrary.sdd_global_minimize_cardinality(node.node, manager.manager)
end
return SddNode(node, manager.manager)
end
function minimum_cardinality(node::SddNode)::SddLibrary.SddLiteral
return SddLibrary.sdd_minimum_cardinality(node.node)
end
function model_count(node::SddNode, manager::SddManager; globally::Bool=false)::SddLibrary.SddModelCount
if !globally
return SddLibrary.sdd_model_count(node.node, manager.manager)
else
return SddLibrary.sdd_global_model_count(node.node, manager.manager)
end
end
# SDD NAVIGATION
function is_true(node::SddNode)::Bool
res = SddLibrary.sdd_node_is_true(node.node)
return convert(Bool, res)
end
function is_false(node::SddNode)::Bool
res = SddLibrary.sdd_node_is_false(node.node)
return convert(Bool, res)
end
function is_literal(node::SddNode)::Bool
res = SddLibrary.sdd_node_is_literal(node.node)
return convert(Bool, res)
end
function is_decision(node::SddNode)::Bool
res = SddLibrary.sdd_node_is_decision(node.node)
return convert(Bool, res)
end
function node_size(node::SddNode)::SddLibrary.SddNodeSize
return SddLibrary.sdd_node_size(node.node)
end
function literal(node::SddNode)::SddLibrary.SddLiteral
return SddLibrary.sdd_node_literal(node.node)
end
function elements(node::SddNode)::Array{PrimeSub,1}
# TODO make abstract array for primesubs and avoid copying
elements_ptr = SddLibrary.sdd_node_elements(node.node)
m = node_size(node)  # number of (prime, sub) pairs of this decision node
primesubs = PrimeSub[]
for i in 0:m-1
# elements are laid out as prime_1, sub_1, ..., prime_m, sub_m
p = SddNode(unsafe_load(elements_ptr, 2i + 1), node.manager)
s = SddNode(unsafe_load(elements_ptr, 2i + 2), node.manager)
push!(primesubs, PrimeSub(p, s))
end
return primesubs
end
function set_bit(bit::Int32, node::SddNode)
SddLibrary.sdd_node_set_bit(bit, node.node)
end
function bit(node::SddNode)::Int32
return SddLibrary.sdd_node_bit(node.node)
end
#
# # SDD FUNCTIONS
# SDD FILE I/O
# TODO make this safer
function read_sdd(filename::String, manager::SddManager)::SddNode
node = SddLibrary.sdd_read(str_to_char(filename), manager.manager)
return SddNode(node, manager.manager)
end
function save(filename::String, node::SddNode)
SddLibrary.sdd_save(str_to_char(filename), node.node)
end
function save_as_dot(filename::String, node::SddNode)
SddLibrary.sdd_save_as_dot(str_to_char(filename), node.node)
end
function shared_save_as_dot(filename::String, manager::SddManager)
SddLibrary.sdd_shared_save_as_dot(str_to_char(filename), manager.manager)
end
# SDD SIZE AND NODE COUNT
# SDD
function count(node::SddNode)::SddLibrary.SddSize
return SddLibrary.sdd_count(node.node)
end
function size(node::SddNode)::SddLibrary.SddSize
return SddLibrary.sdd_size(node.node)
end
# SDD OF MANAGER
manager_size_fnames_c = [
"size", "live_size", "dead_size",
"count", "live_count", "dead_count"
]
for (fnj,fnc) in zip(manager_size_fnames_c, SddLibrary.manager_size_fnames_c)
@eval begin
function $(Symbol(fnj))(manager::SddManager)::SddLibrary.SddSize
return ((SddLibrary).$(Symbol(fnc)))(manager.manager)
end
end
end
# SDD SIZE OF VTREE
vtree_size_fnames_j = [
"size", "live_size", "dead_size",
"size_at", "live_size_at", "dead_size_at",
"size_above", "live_size_above", "dead_size_above",
"count", "live_count", "dead_count",
"count_at", "live_count_at", "dead_count_at",
"count_above", "live_count_above", "dead_count_above"
]
for (fnj,fnc) in zip(vtree_size_fnames_j, SddLibrary.vtree_size_fnames_c)
@eval begin
function $(Symbol(fnj))(vtree::VTree)::SddLibrary.SddSize
return ((SddLibrary).$(Symbol(fnc)))(vtree.vtree)
end
end
end
# CREATING VTREES
function vtree(var_count::Integer, vtree_type::String)::VTree
vtree = SddLibrary.sdd_vtree_new(convert(SddLibrary.SddLiteral,var_count), str_to_char(vtree_type))
return VTree(vtree)
end
function vtree(var_count::Integer, order::Array{<:Integer,1}, vtree_type::String; order_type::String="var_order")::VTree
@assert order_type in Set(["var_order", "is_X_var"]) "$order_type not in a valid order type (var_order, is_X_var]"
if order_type=="var_order"
vtree = SddLibrary.sdd_vtree_new_with_var_order(convert(SddLibrary.SddLiteral,var_count), convert(Array{SddLibrary.SddLiteral,1},order), str_to_char(vtree_type))
elseif order_type=="is_X_var"
vtree = SddLibrary.sdd_vtree_new_X_constrained(convert(SddLibrary.SddLiteral,var_count), convert(Array{SddLibrary.SddLiteral,1},order), str_to_char(vtree_type))
end
return VTree(vtree)
end
function free(vtree::VTree)
SddLibrary.sdd_vtree_free(vtree.vtree)
end
# VTREE FILE I/O
# TODO make this safer
function read_vtree(filename::String)::VTree
vtree = SddLibrary.sdd_vtree_read(str_to_char(filename))
return VTree(vtree)
end
function save(filename::String, vtree::VTree)
SddLibrary.sdd_vtree_save(str_to_char(filename), vtree.vtree)
end
function save_as_dot(filename::String, vtree::VTree)
SddLibrary.sdd_vtree_save_as_dot(str_to_char(filename), vtree.vtree)
end
# SDD MANAGER VTREE
function vtree(manager::SddManager; copy::Bool=false)::VTree
if !copy
vtree = SddLibrary.sdd_manager_vtree(manager.manager)
return VTree(vtree)
else
vtree = SddLibrary.sdd_manager_vtree_copy(manager.manager)
return VTree(vtree)
end
end
# VTREE NAVIGATION
function left(vtree::VTree)::VTree
vtree = SddLibrary.sdd_vtree_left(vtree.vtree)
return VTree(vtree)
end
function right(vtree::VTree)::VTree
vtree = SddLibrary.sdd_vtree_right(vtree.vtree)
return VTree(vtree)
end
function parent(vtree::VTree)::VTree
vtree = SddLibrary.sdd_vtree_parent(vtree.vtree)
return VTree(vtree)
end
# VTREE FUNCTIONS
function is_leaf(vtree::VTree)::Bool
return convert(Bool, SddLibrary.sdd_vtree_is_leaf(vtree.vtree))
end
function is_sub(vtree1::VTree, vtree2::VTree)::Bool
return convert(Bool, SddLibrary.sdd_vtree_is_sub(vtree1.vtree, vtree2.vtree))
end
function lca(vtree1::VTree, vtree2::VTree, root::VTree)::VTree
vtree = SddLibrary.sdd_vtree_lca(vtree1.vtree, vtree2.vtree, root.vtree)
return VTree(vtree)
end
function var_count(vtree::VTree)::SddLibrary.SddLiteral
return SddLibrary.sdd_vtree_var_count(vtree.vtree)
end
function var(vtree::VTree)::SddLibrary.SddLiteral
return SddLibrary.sdd_vtree_var(vtree.vtree)
end
function position(vtree::VTree)::SddLibrary.SddLiteral
return SddLibrary.sdd_vtree_position(vtree.vtree)
end
# Vtree** sdd_vtree_location(Vtree* vtree, SddManager* manager);
# VTREE/SDD EDIT OPERATIONS
function rotate_left(vtree::VTree, manager::SddManager, limited::Union{Bool,Integer})::Bool
return convert(Bool, SddLibrary.sdd_vtree_rotate_left(vtree.vtree, manager.manager, convert(Cint, limited)))
end
function rotate_right(vtree::VTree, manager::SddManager, limited::Union{Bool,Integer})::Bool
return convert(Bool, SddLibrary.sdd_vtree_rotate_right(vtree.vtree, manager.manager, convert(Cint, limited)))
end
function swap(vtree::VTree, manager::SddManager, limited::Union{Bool,Integer})::Bool
return convert(Bool, SddLibrary.sdd_vtree_swap(vtree.vtree, manager.manager, convert(Cint, limited)))
end
# LIMITS FOR VTREE/SDD EDIT OPERATIONS
function init_vtree_size_limit(vtree::VTree, manager::SddManager)
SddLibrary.sdd_manager_init_vtree_size_limit(vtree.vtree, manager.manager)
end
function update_vtree_size_limit(manager::SddManager)
SddLibrary.sdd_manager_update_vtree_size_limit(manager.manager)
end
# # VTREE STATE
# GARBAGE COLLECTION
function ref_count(node::SddNode)::SddLibrary.SddRefCount
return SddLibrary.sdd_ref_count(node.node)
end
function ref(node::SddNode, manager::SddManager)::SddNode
ref_node = SddLibrary.sdd_ref(node.node, manager.manager)
return SddNode(ref_node, manager.manager)
end
function deref(node::SddNode, manager::SddManager)::SddNode
ref_node = SddLibrary.sdd_deref(node.node, manager.manager)
return SddNode(ref_node, manager.manager)
end
function garbage_collect(manager::SddManager)
SddLibrary.sdd_manager_garbage_collect(manager.manager)
end
function garbage_collect(vtree::VTree, manager::SddManager)
SddLibrary.sdd_vtree_garbage_collect(vtree.vtree, manager.manager)
end
function garbage_collect_if(dead_node_threshold::Real, manager::SddManager)::Int32
return SddLibrary.sdd_manager_garbage_collect_if(convert(Float32, dead_node_threshold), manager.manager)
end
function garbage_collect_if(dead_node_threshold::Real, vtree::VTree, manager::SddManager)::Int32
return SddLibrary.sdd_vtree_garbage_collect_if(convert(Float32, dead_node_threshold), vtree.vtree, manager.manager)
end
# MINIMIZATION
function minimize(manager::SddManager; limited::Bool=false)
if !limited
SddLibrary.sdd_manager_minimize(manager.manager)
else
SddLibrary.sdd_manager_minimize_limited(manager.manager)
end
end
function minimize(vtree::VTree, manager::SddManager; limited::Bool=false)::VTree
if !limited
vtree = SddLibrary.sdd_vtree_minimize(vtree.vtree, manager.manager)
else
vtree = SddLibrary.sdd_vtree_minimize_limited(vtree.vtree, manager.manager)
end
return VTree(vtree)
end
function set_vtree_search_convergence_threshold(threshold::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_search_convergence_threshold(convert(Float32, threshold), manager.manager)
end
function set_vtree_search_time_limit(time_limit::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_search_time_limit(convert(Float32, time_limit), manager.manager)
end
function set_vtree_fragment_time_limit(time_limit::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_fragment_time_limit(convert(Float32, time_limit), manager.manager)
end
function set_vtree_operation_time_limit(time_limit::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_operation_time_limit(convert(Float32, time_limit), manager.manager)
end
function set_vtree_apply_time_limit(time_limit::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_apply_time_limit(convert(Float32, time_limit), manager.manager)
end
function set_vtree_operation_memory_limit(memory_limit::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_operation_memory_limit(convert(Float32, memory_limit), manager.manager)
end
function set_vtree_operation_size_limit(size_limit::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_operation_size_limit(convert(Float32, size_limit), manager.manager)
end
function set_vtree_cartesian_product_limit(size_limit::Real, manager::SddManager)
SddLibrary.sdd_manager_set_vtree_cartesian_product_limit(convert(Float32, size_limit), manager.manager)
end
# WMC
function wmc_manager(node::SddNode, log_mode::Bool, manager::SddManager)::WmcManager
wmc = SddLibrary.wmc_manager_new(node.node, convert(Cint, log_mode), manager.manager)
return WmcManager(wmc)
end
function free(manager::WmcManager)
SddLibrary.wmc_manager_free(manager.manager)
end
function set_literal_weight(node::SddNode, weight::Real, manager::WmcManager)
literal = SddLibrary.sdd_node_literal(node.node)
SddLibrary.wmc_set_literal_weight(literal, convert(SddLibrary.SddWmc, weight), manager.manager)
end
function propagate(manager::WmcManager)::SddLibrary.SddWmc
return SddLibrary.wmc_propagate(manager.manager)
end
function zero(manager::WmcManager)::SddLibrary.SddWmc
return SddLibrary.wmc_zero_weight(manager.manager)
end
function one(manager::WmcManager)::SddLibrary.SddWmc
return SddLibrary.wmc_one_weight(manager.manager)
end
function weight(literal::Integer, manager::WmcManager)::SddLibrary.SddWmc
return SddLibrary.wmc_literal_weight(convert(SddLibrary.SddLiteral, literal), manager.manager)
end
function derivative(literal::Integer, manager::WmcManager)::SddLibrary.SddWmc
return SddLibrary.wmc_literal_derivative(convert(SddLibrary.SddLiteral, literal), manager.manager)
end
function probability(literal::Integer, manager::WmcManager)::SddLibrary.SddWmc
return SddLibrary.wmc_literal_pr(convert(SddLibrary.SddLiteral, literal), manager.manager)
end
# CONVENIENCE METHODS
function conjoin(node1::SddNode, node2::SddNode)::SddNode
node = SddLibrary.sdd_conjoin(node1.node, node2.node, node1.manager)
return SddNode(node, node1.manager)
end
function disjoin(node1::SddNode, node2::SddNode)::SddNode
node = SddLibrary.sdd_disjoin(node1.node, node2.node, node1.manager)
return SddNode(node, node1.manager)
end
function negate(node::SddNode)::SddNode
nodeptr = SddLibrary.sdd_negate(node.node, node.manager)
return SddNode(nodeptr, node.manager)
end
function equiv(left::SddNode, right::SddNode)::SddNode
return (~left | right) & (left | ~right)
end
function model_count(node::SddNode; globally::Bool=false)::SddLibrary.SddModelCount
if !globally
return SddLibrary.sdd_model_count(node.node, node.manager)
else
return SddLibrary.sdd_global_model_count(node.node, node.manager)
end
end
function ref(node::SddNode)::SddNode
ref_node = SddLibrary.sdd_ref(node.node, node.manager)
return SddNode(ref_node, node.manager)
end
function deref(node::SddNode)::SddNode
ref_node = SddLibrary.sdd_deref(node.node, node.manager)
return SddNode(ref_node, node.manager)
end
function dot(filename::String, structure::Union{VTree,SddNode})
save_as_dot(filename, structure)
end
function wmc_manager(node::SddNode; log_mode::Bool=true)::WmcManager
wmc = SddLibrary.wmc_manager_new(node.node, convert(Cint, log_mode), node.manager)
return WmcManager(wmc)
end
Base.:&(node1::SddNode, node2::SddNode) = conjoin(node1,node2)
Base.:|(node1::SddNode, node2::SddNode) = disjoin(node1,node2)
Base.:~(node::SddNode) = negate(node)
↔(left::SddNode, right::SddNode) = equiv(left,right)
# FNF methods
@with_kw struct CompilerOptions
vtree_search_mode::Int32 = -1
post_search::Bool = false
verbose::Bool = false
end
function read_cnf(filename::String, manager::SddManager; compiler_options=CompilerOptions())::SddNode
cnf = read_cnf(filename)
sdd_node = fnf_to_sdd(cnf, manager.manager, compiler_options)
return SddNode(sdd_node, manager.manager)
end
function read_dnf(filename::String, manager::SddManager; compiler_options=CompilerOptions())::SddNode
dnf = read_dnf(filename)
sdd_node = fnf_to_sdd(dnf, manager.manager, compiler_options)
return SddNode(sdd_node, manager.manager)
end
export ↔
end
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 5927 | # structures
mutable struct LitSet
id::SddLibrary.SddSize
literal_count::SddLibrary.SddLiteral
literals::Array{SddLibrary.SddLiteral}
op::SddLibrary.BoolOp
vtree::Ptr{SddLibrary.VTree_c}
litset_bit::UInt8
LitSet() = new()
end
mutable struct Fnf
var_count::SddLibrary.SddLiteral
litset_count::SddLibrary.SddSize
litsets::Array{LitSet}
op::SddLibrary.BoolOp
end
# i/o
function read_cnf(filename::String)::Fnf
return parse_fnf(filename, convert(SddLibrary.BoolOp,0))
end
function read_dnf(filename::String)::Fnf
return parse_fnf(filename, convert(SddLibrary.BoolOp,1))
end
function parse_fnf(filename::String, op::SddLibrary.BoolOp)::Fnf
f = open(filename)
lines = readlines(f)
close(f)
id::SddLibrary.SddSize = 0
var_count::SddLibrary.SddLiteral = 0
litset_count::SddLibrary.SddSize = 0
n_extra_lines = 0
for l in lines
l_split = split(l)
n_extra_lines += 1
if l_split[1]=="c"
continue
elseif l_split[1]=="p"
if op==0 @assert(l_split[2]=="cnf") else @assert(l_split[2]=="dnf") end
var_count = parse(SddLibrary.SddLiteral, l_split[3])
litset_count = parse(SddLibrary.SddSize, l_split[4])
break
end
end
litsets = Array{LitSet}(undef, litset_count)
for c in 1:litset_count
id += 1
terms = split(lines[c+n_extra_lines])
literals = Array{SddLibrary.SddLiteral}(undef, 2var_count)
for i in 1:length(terms)
if terms[i]== "0" break end
literals[i] = parse(SddLibrary.SddLiteral,terms[i])
#TODO add test if i>2varcount raise
end
clause = LitSet()
clause.id = id
clause.literal_count = length(terms)-1
clause.op = convert(SddLibrary.BoolOp, 1-op)
clause.literals = literals
clause.litset_bit = 0
litsets[c] = clause
end
return Fnf(var_count, litset_count, litsets, op)
end
# compiling
ZERO(M,OP) = (OP==SddLibrary.CONJOIN ? SddLibrary.sdd_manager_false(M) : SddLibrary.sdd_manager_true(M))
ONE(M,OP) = (OP==SddLibrary.CONJOIN ? SddLibrary.sdd_manager_true(M) : SddLibrary.sdd_manager_false(M))
function fnf_to_sdd(fnf::Fnf, manager::Ptr{SddLibrary.SddManager_c}, options)::Ptr{SddLibrary.SddNode_c}
# degenerate fnf
if fnf.litset_count==0 return ONE(manager,fnf.op) end
for ls in fnf.litsets
if ls.literal_count==0
return ZERO(manager,fnf.op)
end
end
# non-degenerate fnf
if options.vtree_search_mode<0
SddLibrary.sdd_manager_auto_gc_and_minimize_on(manager)
return fnf_to_sdd_auto(fnf, manager, options)
else
SddLibrary.sdd_manager_auto_gc_and_minimize_off(manager)
return fnf_to_sdd_manual(fnf, manager, options)
end
end
function fnf_to_sdd_auto(fnf::Fnf, manager::Ptr{SddLibrary.SddManager_c}, options)::Ptr{SddLibrary.SddNode_c}
# TODO verbose print stuff
verbose = options.verbose
op = fnf.op
node = ONE(manager,fnf.op)
count = fnf.litset_count
litsets = view(fnf.litsets,:)
# need to convert count to Int64, otherwise sort! does not work
for i in 1:convert(Int64,count)
litsets[i:count] = sort_litsets_by_lca(view(litsets,i:convert(Int64,count)), manager)
SddLibrary.sdd_ref(node, manager)
l = apply_litset(litsets[i], manager)
SddLibrary.sdd_deref(node, manager)
node = SddLibrary.sdd_apply(l,node,op,manager)
end
return node
end
function fnf_to_sdd_manual(fnf::Fnf, manager::Ptr{SddLibrary.SddManager_c}, options)::Ptr{SddLibrary.SddNode_c}
verbose = options.verbose
period = options.vtree_search_mode
op = fnf.op
count = fnf.litset_count
litsets = view(fnf.litsets,:)
node = ONE(manager, op)
# need to convert count to Int64, otherwise sort! does not work
for i in 1:convert(Int64,count)
if (period>0) && (i>1) && ((i-1)%period==0)
SddLibrary.sdd_ref(node, manager)
SddLibrary.sdd_manager_minimize_limited(manager)
SddLibrary.sdd_deref(node, manager)
#TODO possible without copying?
sort_litsets_by_lca(view(litsets,i:convert(Int64,count)), manager)
end
l = apply_litset(litsets[i], manager)
node = SddLibrary.sdd_apply(l, node, op, manager)
end
return node
end
function apply_litset(litset::LitSet, manager::Ptr{SddLibrary.SddManager_c})::Ptr{SddLibrary.SddNode_c}
op = litset.op
literals = litset.literals
node = ONE(manager,op)
for i in 1:litset.literal_count
literal = SddLibrary.sdd_manager_literal(literals[i], manager)
node = SddLibrary.sdd_apply(node, literal, op, manager)
end
return node
end
# sorting
function sort_litsets_by_lca(litsets::SubArray{LitSet}, manager::Ptr{SddLibrary.SddManager_c})
for ls in litsets
ls.vtree = SddLibrary.sdd_manager_lca_of_literals(ls.literal_count, ls.literals, manager)
end
sort!(litsets)
end
function Base.isless(litset1::LitSet, litset2::LitSet)::Bool
vtree1 = litset1.vtree
vtree2 = litset2.vtree
p1 = SddLibrary.sdd_vtree_position(vtree1)
p2 = SddLibrary.sdd_vtree_position(vtree2)
sub12 = convert(Bool, SddLibrary.sdd_vtree_is_sub(vtree1,vtree2))
sub21 = convert(Bool, SddLibrary.sdd_vtree_is_sub(vtree2,vtree1))
if ((vtree1!=vtree2) && (sub21 || (!sub12 && (p1>p2)))) return true
elseif ((vtree1!=vtree2) && (sub12 || (!sub21 && (p1<p2)))) return false
else
l1 = litset1.literal_count
l2 = litset2.literal_count
if l1>l2 return true
elseif l1<l2 return false
else
id1 = litset1.id
id2 = litset2.id
if id1>id2 return true
elseif id1<id2 return false
else return false
end
end
end
end
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | code | 22632 | module SddLibrary
@static if Sys.isunix()
@static if Sys.islinux()
const LIBSDD = "$(@__DIR__)/../deps/sdd-2.0/lib/Linux/libsdd"
elseif Sys.isapple()
const LIBSDD = "$(@__DIR__)/../deps/sdd-2.0/lib/Darwin/libsdd"
else
LoadError("sddapi.jl", 0, "Sdd library only available on Linux and Darwin")
end
else
LoadError("sddapi.jl", 0, "Sdd library only available on Linux and Darwin")
end
const SddSize = Csize_t
const SddNodeSize = Cuint
const SddRefCount = Cuint
const SddModelCount = Culonglong
const SddWmc = Cdouble
const SddLiteral = Clong
const SddID = SddSize
const BoolOp = Cushort
const CONJOIN = convert(BoolOp, 0)
const DISJOIN = convert(BoolOp, 1)
struct VTree_c end
struct SddNode_c end
struct SddManager_c end
struct WmcManager_c end
# SDD MANAGER FUNCTIONS
function sdd_manager_new(vtree::Ptr{VTree_c})::Ptr{SddManager_c}
return ccall((:sdd_manager_new, LIBSDD), Ptr{SddManager_c}, (Ptr{VTree_c},), vtree)
end
function sdd_manager_create(var_count::SddLiteral, auto_gc_and_minimize::Cint)::Ptr{SddManager_c}
return ccall((:sdd_manager_create, LIBSDD),Ptr{SddManager_c}, (SddLiteral,Cint), var_count, auto_gc_and_minimize)
end
# function sdd_manager_new(size::SddSize, nodes::Array{Ptr{SddNode_c}}, from_manager::Ptr{SddManager_c})::Ptr{SddManager_c}
# return ccall((:sdd_manager_copy, LIBSDD), Ptr{SddManager_c}, (SddSize,Array{Ptr{SddNode_c}},Ptr{SddManager_c}), size, nodes, from_manager)
# end
function sdd_manager_free(manager::Ptr{SddManager_c})
ccall((:sdd_manager_free, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_manager_print(manager::Ptr{SddManager_c})
ccall((:sdd_manager_print, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_manager_auto_gc_and_minimize_on(manager::Ptr{SddManager_c})
ccall((:sdd_manager_auto_gc_and_minimize_on, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_manager_auto_gc_and_minimize_off(manager::Ptr{SddManager_c})
ccall((:sdd_manager_auto_gc_and_minimize_off, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_manager_is_auto_gc_and_minimize_on(manager::Ptr{SddManager_c})::Cint
return ccall((:sdd_manager_is_auto_gc_and_minimize_on, LIBSDD), Cint, (Ptr{SddManager_c},), manager)
end
# TODO void sdd_manager_set_minimize_function(SddVtreeSearchFunc func, SddManager* manager);
function sdd_manager_unset_minimize_function(manager::Ptr{SddManager_c})
ccall((:sdd_manager_unset_minimize_function, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_manager_options(manager::Ptr{SddManager_c})::Ptr{Cvoid}
return ccall((:sdd_manager_options, LIBSDD), Ptr{Cvoid}, (Ptr{SddManager_c},), manager)
end
# void sdd_manager_set_options(void* options, SddManager* manager);
function sdd_manager_is_var_used(var::SddLiteral, manager::Ptr{SddManager_c})::Cint
return ccall((:sdd_manager_is_var_used, LIBSDD), Cint, (SddLiteral, Ptr{SddManager_c}), var, manager)
end
function sdd_manager_vtree_of_var(var::SddLiteral, manager::Ptr{SddManager_c})::Ptr{VTree_c}
return ccall((:sdd_manager_vtree_of_var, LIBSDD), Ptr{VTree_c}, (SddLiteral, Ptr{SddManager_c}), var, manager)
end
function sdd_manager_lca_of_literals(count::SddLiteral, literals::Array{SddLiteral,1}, manager::Ptr{SddManager_c})::Ptr{VTree_c}
return ccall((:sdd_manager_lca_of_literals, LIBSDD), Ptr{VTree_c}, (Int32, Ptr{SddLiteral}, Ptr{SddManager_c}), count, literals, manager)
end
function sdd_manager_var_count(manager::Ptr{SddManager_c})::SddLiteral
return ccall((:sdd_manager_var_count, LIBSDD), SddLiteral, (Ptr{SddManager_c},), manager)
end
# TODO void sdd_manager_var_order(SddLiteral* var_order, SddManager *manager);
function sdd_manager_add_var_before_first(manager::Ptr{SddManager_c})
ccall((:sdd_manager_add_var_before_first, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_manager_add_var_after_last(manager::Ptr{SddManager_c})
ccall((:sdd_manager_add_var_after_last, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_manager_add_var_before(target_var::SddLiteral, manager::Ptr{SddManager_c})
ccall((:sdd_manager_add_var_before, LIBSDD), Cvoid, (SddLiteral, Ptr{SddManager_c}), target_var, manager)
end
function sdd_manager_add_var_after(target_var::SddLiteral, manager::Ptr{SddManager_c})
ccall((:sdd_manager_add_var_after, LIBSDD), Cvoid, (SddLiteral, Ptr{SddManager_c}), target_var, manager)
end
# TERMINAL SDDS
function sdd_manager_true(manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_manager_true, LIBSDD), Ptr{SddNode_c}, (Ptr{SddManager_c},), manager)
end
function sdd_manager_false(manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_manager_false, LIBSDD), Ptr{SddNode_c}, (Ptr{SddManager_c},), manager)
end
function sdd_manager_literal(literal::SddLiteral, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_manager_literal, LIBSDD), Ptr{SddNode_c}, (SddLiteral, Ptr{SddManager_c}), literal, manager)
end
# SDD QUERIES AND TRANSFORMATIONS
function sdd_apply(node1::Ptr{SddNode_c}, node2::Ptr{SddNode_c}, op::BoolOp ,manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_apply, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c}, Ptr{SddNode_c}, BoolOp, Ptr{SddManager_c}), node1, node2, op, manager)
end
function sdd_conjoin(node1::Ptr{SddNode_c}, node2::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_conjoin, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c}, Ptr{SddNode_c}, Ptr{SddManager_c}), node1, node2, manager)
end
function sdd_disjoin(node1::Ptr{SddNode_c}, node2::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_disjoin, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c}, Ptr{SddNode_c}, Ptr{SddManager_c}), node1, node2, manager)
end
function sdd_negate(node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_negate, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c}, Ptr{SddManager_c}), node, manager)
end
function sdd_condition(lit::SddLiteral, node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_condition, LIBSDD), Ptr{SddNode_c}, (SddLiteral, Ptr{SddNode_c}, Ptr{SddManager_c}), lit, node, manager)
end
function sdd_exists(lit::SddLiteral, node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_exists, LIBSDD), Ptr{SddNode_c}, (SddLiteral, Ptr{SddNode_c}, Ptr{SddManager_c}), lit, node, manager)
end
function sdd_exists_multiple(exists_map::Array{Cint,1}, node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_exists_multiple, LIBSDD), Ptr{SddNode_c}, (Ptr{Cint}, Ptr{SddNode_c}, Ptr{SddManager_c}), exists_map, node, manager)
end
function sdd_exists_multiple_static(exists_map::Array{Cint,1}, node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_exists_multiple_static, LIBSDD), Ptr{SddNode_c}, (Ptr{Cint}, Ptr{SddNode_c}, Ptr{SddManager_c}), exists_map, node, manager)
end
function sdd_forall(lit::SddLiteral, node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_forall, LIBSDD), Ptr{SddNode_c}, (SddLiteral, Ptr{SddNode_c}, Ptr{SddManager_c}), lit, node, manager)
end
function sdd_minimize_cardinality(node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_minimize_cardinality, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c}, Ptr{SddManager_c}), node, manager)
end
function sdd_global_minimize_cardinality(node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_global_minimize_cardinality, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c}, Ptr{SddManager_c}), node, manager)
end
function sdd_minimum_cardinality(node::Ptr{SddNode_c})::SddLiteral
return ccall((:sdd_minimum_cardinality, LIBSDD), SddLiteral, (Ptr{SddNode_c}, ), node)
end
function sdd_model_count(node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::SddModelCount
return ccall((:sdd_model_count, LIBSDD), SddModelCount, (Ptr{SddNode_c}, Ptr{SddManager_c}), node, manager)
end
function sdd_global_model_count(node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::SddModelCount
return ccall((:sdd_global_model_count, LIBSDD), SddModelCount, (Ptr{SddNode_c}, Ptr{SddManager_c}), node, manager)
end
# // SDD NAVIGATION
function sdd_node_is_true(node::Ptr{SddNode_c})::Int32
return ccall((:sdd_node_is_true, LIBSDD), Cint, (Ptr{SddNode_c}, ), node)
end
function sdd_node_is_false(node::Ptr{SddNode_c})::Int32
return ccall((:sdd_node_is_false, LIBSDD), Cint, (Ptr{SddNode_c}, ), node)
end
function sdd_node_is_literal(node::Ptr{SddNode_c})::Int32
return ccall((:sdd_node_is_literal, LIBSDD), Cint, (Ptr{SddNode_c}, ), node)
end
function sdd_node_is_decision(node::Ptr{SddNode_c})::Int32
return ccall((:sdd_node_is_decision, LIBSDD), Cint, (Ptr{SddNode_c}, ), node)
end
function sdd_node_size(node::Ptr{SddNode_c})::SddNodeSize
return ccall((:sdd_node_size, LIBSDD), SddNodeSize, (Ptr{SddNode_c}, ), node)
end
function sdd_node_literal(node::Ptr{SddNode_c})::SddLiteral
return ccall((:sdd_node_literal, LIBSDD), SddLiteral, (Ptr{SddNode_c}, ), node)
end
function sdd_node_elements(node::Ptr{SddNode_c})::Ptr{Ptr{SddNode_c}}
return ccall((:sdd_node_elements, LIBSDD), Ptr{Ptr{SddNode_c}}, (Ptr{SddNode_c}, ), node)
end
function sdd_node_set_bit(bit::Int32, node::Ptr{SddNode_c})
ccall((:sdd_node_set_bit, LIBSDD), Cvoid, (Cint, Ptr{SddNode_c}), bit, node)
end
function sdd_node_bit(node::Ptr{SddNode_c})::Int32
return ccall((:sdd_node_bit, LIBSDD), Int32, (Ptr{SddNode_c}, ), node)
end
# # SDD FUNCTIONS
#
# SDD FILE I/O
function sdd_read(filename::Ptr{UInt8}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_read, LIBSDD), Ptr{SddNode_c}, (Ptr{UInt8}, Ptr{SddManager_c}), filename, manager)
end
function sdd_save(filename::Ptr{UInt8}, node::Ptr{SddNode_c})
ccall((:sdd_save, LIBSDD), Cvoid, (Ptr{UInt8}, Ptr{SddNode_c}), filename, node)
end
function sdd_save_as_dot(filename::Ptr{UInt8}, node::Ptr{SddNode_c})
ccall((:sdd_save_as_dot, LIBSDD), Cvoid, (Ptr{UInt8}, Ptr{SddNode_c}), filename, node)
end
function sdd_shared_save_as_dot(filename::Ptr{UInt8}, manager::Ptr{SddManager_c})
ccall((:sdd_shared_save_as_dot, LIBSDD), Cvoid, (Ptr{UInt8}, Ptr{SddManager_c}), filename, manager)
end
# // SDD SIZE AND NODE COUNT
# //SDD
function sdd_count(node::Ptr{SddNode_c})::SddSize
return ccall((:sdd_count, LIBSDD), SddSize, (Ptr{SddNode_c},), node)
end
function sdd_size(node::Ptr{SddNode_c})::SddSize
return ccall((:sdd_size, LIBSDD), SddSize, (Ptr{SddNode_c},), node)
end
# TODO SddSize sdd_shared_size(SddNode** nodes, SddSize count);
# //SDD OF MANAGER
manager_size_fnames_c = [
"sdd_manager_size", "sdd_manager_live_size", "sdd_manager_dead_size",
"sdd_manager_count", "sdd_manager_live_count", "sdd_manager_dead_count"
]
for fnc in manager_size_fnames_c
@eval begin
function $(Symbol(fnc))(manager::Ptr{SddManager_c})::SddSize
return ccall(($(:($(fnc))), LIBSDD), SddSize, (Ptr{SddManager_c},), manager)
end
end
end
# SDD SIZE OF VTREE
vtree_size_fnames_c = [
"sdd_vtree_size", "sdd_vtree_live_size", "sdd_vtree_dead_size",
"sdd_vtree_size_at", "sdd_vtree_live_size_at", "sdd_vtree_dead_size_at",
"sdd_vtree_size_above", "sdd_vtree_live_size_above", "sdd_vtree_dead_size_above",
"sdd_vtree_count", "sdd_vtree_live_count", "sdd_vtree_dead_count",
"sdd_vtree_count_at", "sdd_vtree_live_count_at", "sdd_vtree_dead_count_at",
"sdd_vtree_count_above", "sdd_vtree_live_count_above", "sdd_vtree_dead_count_above"
]
for fnc in vtree_size_fnames_c
@eval begin
function $(Symbol(fnc))(vtree::Ptr{VTree_c})::SddSize
return ccall(($(:($(fnc))), LIBSDD), SddSize, (Ptr{VTree_c},), vtree)
end
end
end
# CREATING VTREES
function sdd_vtree_new(var_count::SddLiteral, vtree_type::Ptr{UInt8})::Ptr{VTree_c}
return ccall((:sdd_vtree_new, LIBSDD), Ptr{VTree_c}, (SddLiteral, Ptr{UInt8}), var_count, vtree_type)
end
function sdd_vtree_new_with_var_order(var_count::SddLiteral, var_order::Array{SddLiteral,1}, vtree_type::Ptr{UInt8})::Ptr{VTree_c}
return ccall((:sdd_vtree_new_with_var_order, LIBSDD), Ptr{VTree_c}, (SddLiteral, Ptr{SddLiteral}, Ptr{UInt8}), var_count, var_order, vtree_type)
end
function sdd_vtree_new_X_constrained(var_count::SddLiteral, is_X_var::Array{SddLiteral,1}, vtree_type::Ptr{UInt8})::Ptr{VTree_c}
return ccall((:sdd_vtree_new_X_constrained, LIBSDD), Ptr{VTree_c}, (SddLiteral, Ptr{SddLiteral}, Ptr{UInt8}), var_count, is_X_var, vtree_type)
end
function sdd_vtree_free(vtree::Ptr{VTree_c})
ccall((:sdd_vtree_free, LIBSDD), Cvoid, (Ptr{VTree_c},), vtree)
end
# VTREE FILE I/O
function sdd_vtree_read(filename::Ptr{UInt8})::Ptr{VTree_c}
return ccall((:sdd_vtree_read, LIBSDD), Ptr{VTree_c}, (Ptr{UInt8},), filename)
end
function sdd_vtree_save(filename::Ptr{UInt8}, vtree::Ptr{VTree_c})
ccall((:sdd_vtree_save, LIBSDD), Cvoid, (Ptr{UInt8}, Ptr{VTree_c}), filename, vtree)
end
function sdd_vtree_save_as_dot(filename::Ptr{UInt8}, vtree::Ptr{VTree_c})
ccall((:sdd_vtree_save_as_dot, LIBSDD), Cvoid, (Ptr{UInt8}, Ptr{VTree_c}), filename, vtree)
end
# // SDD MANAGER VTREE
function sdd_manager_vtree(manager::Ptr{SddManager_c})::Ptr{VTree_c}
return ccall((:sdd_manager_vtree, LIBSDD), Ptr{VTree_c}, (Ptr{SddManager_c},), manager)
end
function sdd_manager_vtree_copy(manager::Ptr{SddManager_c})::Ptr{VTree_c}
return ccall((:sdd_manager_vtree_copy, LIBSDD), Ptr{VTree_c}, (Ptr{SddManager_c},), manager)
end
# // VTREE NAVIGATION
function sdd_vtree_left(vtree::Ptr{VTree_c})::Ptr{VTree_c}
return ccall((:sdd_vtree_left, LIBSDD), Ptr{VTree_c}, (Ptr{VTree_c},), vtree)
end
function sdd_vtree_right(vtree::Ptr{VTree_c})::Ptr{VTree_c}
return ccall((:sdd_vtree_right, LIBSDD), Ptr{VTree_c}, (Ptr{VTree_c},), vtree)
end
function sdd_vtree_parent(vtree::Ptr{VTree_c})::Ptr{VTree_c}
return ccall((:sdd_vtree_parent, LIBSDD), Ptr{VTree_c}, (Ptr{VTree_c},), vtree)
end
# VTREE FUNCTIONS
function sdd_vtree_is_leaf(vtree::Ptr{VTree_c})::Cint
return ccall((:sdd_vtree_is_leaf, LIBSDD), Cint, (Ptr{VTree_c}, ), vtree)
end
function sdd_vtree_is_sub(vtree1::Ptr{VTree_c}, vtree2::Ptr{VTree_c})::Cint
return ccall((:sdd_vtree_is_sub, LIBSDD), Cint, (Ptr{VTree_c}, Ptr{VTree_c}), vtree1, vtree2)
end
function sdd_vtree_lca(vtree1::Ptr{VTree_c}, vtree2::Ptr{VTree_c}, root::Ptr{VTree_c})::Ptr{VTree_c}
return ccall((:sdd_vtree_lca, LIBSDD), Ptr{VTree_c}, (Ptr{VTree_c}, Ptr{VTree_c}, Ptr{VTree_c}), vtree1, vtree2, root)
end
function sdd_vtree_var_count(vtree::Ptr{VTree_c})::SddLiteral
return ccall((:sdd_vtree_var_count, LIBSDD), SddLiteral, (Ptr{VTree_c}, ), vtree)
end
function sdd_vtree_var(vtree::Ptr{VTree_c})::SddLiteral
return ccall((:sdd_vtree_var, LIBSDD), SddLiteral, (Ptr{VTree_c}, ), vtree)
end
function sdd_vtree_position(vtree::Ptr{VTree_c})::SddLiteral
return ccall((:sdd_vtree_position, LIBSDD), SddLiteral, (Ptr{VTree_c}, ), vtree)
end
# Vtree** sdd_vtree_location(Vtree* vtree, SddManager* manager);
# VTREE/SDD EDIT OPERATIONS
function sdd_vtree_rotate_left(vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c}, limited::Cint)::Cint
return ccall((:sdd_vtree_rotate_left, LIBSDD), Cint, (Ptr{VTree_c},Ptr{SddManager_c}, Cint), vtree, manager, limited)
end
function sdd_vtree_rotate_right(vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c}, limited::Cint)::Cint
return ccall((:sdd_vtree_rotate_right, LIBSDD), Cint, (Ptr{VTree_c},Ptr{SddManager_c}, Cint), vtree, manager, limited)
end
function sdd_vtree_swap(vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c}, limited::Cint)::Cint
return ccall((:sdd_vtree_swap, LIBSDD), Cint, (Ptr{VTree_c}, Ptr{SddManager_c}, Cint), vtree, manager, limited)
end
# LIMITS FOR VTREE/SDD EDIT OPERATIONS
function sdd_manager_init_vtree_size_limit(vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c})
ccall((:sdd_manager_init_vtree_size_limit, LIBSDD), Cvoid, (Ptr{VTree_c}, Ptr{SddManager_c}), vtree, manager)
end
function sdd_manager_update_vtree_size_limit(manager::Ptr{SddManager_c})
ccall((:sdd_manager_update_vtree_size_limit, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
# # VTREE STATE
# GARBAGE COLLECTION
function sdd_ref_count(node::Ptr{SddNode_c})::SddRefCount
return ccall((:sdd_ref_count, LIBSDD), SddRefCount, (Ptr{SddNode_c},), node)
end
function sdd_ref(node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_ref, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c},Ptr{SddManager_c}), node, manager)
end
function sdd_deref(node::Ptr{SddNode_c}, manager::Ptr{SddManager_c})::Ptr{SddNode_c}
return ccall((:sdd_deref, LIBSDD), Ptr{SddNode_c}, (Ptr{SddNode_c},Ptr{SddManager_c}), node, manager)
end
function sdd_manager_garbage_collect(manager::Ptr{SddManager_c})
ccall((:sdd_manager_garbage_collect, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_vtree_garbage_collect(vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c})
ccall((:sdd_vtree_garbage_collect, LIBSDD), Cvoid, (Ptr{VTree_c}, Ptr{SddManager_c}), vtree, manager)
end
function sdd_manager_garbage_collect_if(dead_node_threshold::Float32, manager::Ptr{SddManager_c})::Int32
return ccall((:sdd_manager_garbage_collect_if, LIBSDD), Int32, (Float32, Ptr{SddManager_c}), dead_node_threshold, manager)
end
function sdd_vtree_garbage_collect_if(dead_node_threshold::Float32, vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c})::Int32
    return ccall((:sdd_vtree_garbage_collect_if, LIBSDD), Int32, (Float32, Ptr{VTree_c}, Ptr{SddManager_c}), dead_node_threshold, vtree, manager)
end
# MINIMIZATION
function sdd_manager_minimize(manager::Ptr{SddManager_c})
ccall((:sdd_manager_minimize, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_vtree_minimize(vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c})::Ptr{VTree_c}
    return ccall((:sdd_vtree_minimize, LIBSDD), Ptr{VTree_c}, (Ptr{VTree_c}, Ptr{SddManager_c}), vtree, manager)
end
function sdd_manager_minimize_limited(manager::Ptr{SddManager_c})
ccall((:sdd_manager_minimize_limited, LIBSDD), Cvoid, (Ptr{SddManager_c},), manager)
end
function sdd_vtree_minimize_limited(vtree::Ptr{VTree_c}, manager::Ptr{SddManager_c})::Ptr{VTree_c}
return ccall((:sdd_vtree_minimize_limited, LIBSDD), Ptr{VTree_c}, (Ptr{VTree_c}, Ptr{SddManager_c}), vtree, manager)
end
function sdd_manager_set_vtree_search_convergence_threshold(threshold::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_search_convergence_threshold, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), threshold, manager)
end
function sdd_manager_set_vtree_search_time_limit(time_limit::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_search_time_limit, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), time_limit, manager)
end
function sdd_manager_set_vtree_fragment_time_limit(time_limit::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_fragment_time_limit, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), time_limit, manager)
end
function sdd_manager_set_vtree_operation_time_limit(time_limit::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_operation_time_limit, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), time_limit, manager)
end
function sdd_manager_set_vtree_apply_time_limit(time_limit::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_apply_time_limit, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), time_limit, manager)
end
function sdd_manager_set_vtree_operation_memory_limit(memory_limit::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_operation_memory_limit, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), memory_limit, manager)
end
function sdd_manager_set_vtree_operation_size_limit(size_limit::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_operation_size_limit, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), size_limit, manager)
end
function sdd_manager_set_vtree_cartesian_product_limit(size_limit::Float32, manager::Ptr{SddManager_c})
ccall((:sdd_manager_set_vtree_cartesian_product_limit, LIBSDD), Cvoid, (Float32, Ptr{SddManager_c}), size_limit, manager)
end
# WMC
function wmc_manager_new(node::Ptr{SddNode_c}, log_mode::Cint, manager::Ptr{SddManager_c})::Ptr{WmcManager_c}
return ccall((:wmc_manager_new, LIBSDD), Ptr{WmcManager_c}, (Ptr{SddNode_c}, Cint, Ptr{SddManager_c}), node, log_mode, manager)
end
function wmc_manager_free(wmc_manager::Ptr{WmcManager_c})
ccall((:wmc_manager_free, LIBSDD), Cvoid, (Ptr{WmcManager_c},), wmc_manager)
end
function wmc_set_literal_weight(literal::SddLiteral, weight::SddWmc, wmc_manager::Ptr{WmcManager_c})
ccall((:wmc_set_literal_weight, LIBSDD), Cvoid, (SddLiteral, SddWmc, Ptr{WmcManager_c}), literal, weight, wmc_manager)
end
function wmc_propagate(wmc_manager::Ptr{WmcManager_c})::SddWmc
return ccall((:wmc_propagate, LIBSDD), SddWmc, (Ptr{WmcManager_c},), wmc_manager)
end
function wmc_zero_weight(wmc_manager::Ptr{WmcManager_c})::SddWmc
return ccall((:wmc_zero_weight, LIBSDD), SddWmc, (Ptr{WmcManager_c},), wmc_manager)
end
function wmc_one_weight(wmc_manager::Ptr{WmcManager_c})::SddWmc
return ccall((:wmc_one_weight, LIBSDD), SddWmc, (Ptr{WmcManager_c},), wmc_manager)
end
function wmc_literal_weight(literal::SddLiteral, wmc_manager::Ptr{WmcManager_c})::SddWmc
return ccall((:wmc_literal_weight, LIBSDD), SddWmc, (SddLiteral, Ptr{WmcManager_c}), literal, wmc_manager)
end
function wmc_literal_derivative(literal::SddLiteral, wmc_manager::Ptr{WmcManager_c})::SddWmc
return ccall((:wmc_literal_derivative, LIBSDD), SddWmc, (SddLiteral, Ptr{WmcManager_c}), literal, wmc_manager)
end
function wmc_literal_pr(literal::SddLiteral, wmc_manager::Ptr{WmcManager_c})::SddWmc
return ccall((:wmc_literal_pr, LIBSDD), SddWmc, (SddLiteral, Ptr{WmcManager_c}), literal, wmc_manager)
end
end
# TO BE WRAPPED
# // SDD FUNCTIONS
# SddSize sdd_id(SddNode* node);
# int sdd_garbage_collected(SddNode* node, SddSize id);
# Vtree* sdd_vtree_of(SddNode* node);
# SddNode* sdd_copy(SddNode* node, SddManager* dest_manager);
# SddNode* sdd_rename_variables(SddNode* node, SddLiteral* variable_map, SddManager* manager);
# int* sdd_variables(SddNode* node, SddManager* manager);
# // VTREE STATE
# int sdd_vtree_bit(const Vtree* vtree);
# void sdd_vtree_set_bit(int bit, Vtree* vtree);
# void* sdd_vtree_data(Vtree* vtree);
# void sdd_vtree_set_data(void* data, Vtree* vtree);
# void* sdd_vtree_search_state(const Vtree* vtree);
# void sdd_vtree_set_search_state(void* search_state, Vtree* vtree);
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"Apache-2.0"
] | 0.1.2 | 1282e7eb111aed8b74f0bf5b9b7a27fa1424088e | docs | 902 | # SententialDecisionDiagrams.jl
Julia wrapper package to interactively use [Sentential Decision Diagrams (SDDs)](http://reasoning.cs.ucla.edu/sdd/).
## Installation
```
pkg> add SententialDecisionDiagrams
```
## Test
Clone the repo:
```
git clone https://github.com/pedrozudo/SententialDecisionDiagrams.jl
```
`cd` into the cloned directory and run any of the example scripts:
```
cd SententialDecisionDiagrams.jl
julia examples/test-1.jl
```
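## Usage
The package exposes thin `ccall` wrappers that mirror the C API of the [SDD library](http://reasoning.cs.ucla.edu/sdd/), so usage follows the C documentation closely. Below is a minimal, illustrative sketch of creating, querying, and saving a vtree. It assumes that `SddLiteral` is an integer alias for the C literal type and that the low-level wrappers are accessible after `using` the package (qualify them with the module name otherwise); strings are passed as raw pointers because the wrappers take `Ptr{UInt8}` arguments.
```
using SententialDecisionDiagrams
vtype = "balanced"                   # a vtree type accepted by the C library
vtree = GC.@preserve vtype sdd_vtree_new(SddLiteral(4), pointer(vtype))
println(sdd_vtree_var_count(vtree))  # number of variables in the vtree
fname = "example.vtree"
GC.@preserve fname sdd_vtree_save(pointer(fname), vtree)
sdd_vtree_free(vtree)                # free the underlying C object
```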
## References
Other languages:
* C: http://reasoning.cs.ucla.edu/sdd/
* Java: https://github.com/jessa/JSDD
* Python: https://github.com/wannesm/PySDD
## Contact
* Pedro Zuidberg Dos Martires, KU Leuven, https://pedrozudo.github.io/
## License
Julia SDD wrapper:
Copyright 2019, KU Leuven under the Apache License, Version 2.0.
SDD package:
Copyright 2013-2018, Regents of the University of California
Licensed under the Apache License, Version 2.0.
| SententialDecisionDiagrams | https://github.com/pedrozudo/SententialDecisionDiagrams.jl.git |
|
[
"MIT"
] | 0.2.6 | e5acfa4a9c84252491860939a2af7f4ffe501057 | code | 252 | using Documenter
using POMDPTesting
makedocs(
format =:html,
sitename = "POMDPTesting.jl"
)
deploydocs(
repo = "github.com/JuliaPOMDP/POMDPTesting.jl.git",
julia = "1.0",
target = "build",
deps = nothing,
make = nothing
)
| POMDPTesting | https://github.com/JuliaPOMDP/POMDPTesting.jl.git |
|
[
"MIT"
] | 0.2.6 | e5acfa4a9c84252491860939a2af7f4ffe501057 | code | 86 | module POMDPTesting
using Reexport
@reexport using POMDPTools.Testing
end # module
| POMDPTesting | https://github.com/JuliaPOMDP/POMDPTesting.jl.git |
|
[
"MIT"
] | 0.2.6 | e5acfa4a9c84252491860939a2af7f4ffe501057 | code | 1605 | using Test
using POMDPs
using POMDPTesting
using POMDPModelTools
import POMDPs:
transition,
observation,
initialstate,
updater,
states,
actions,
observations
struct TestPOMDP <: POMDP{Bool, Bool, Bool} end
updater(problem::TestPOMDP) = DiscreteUpdater(problem)
initialstate(::TestPOMDP) = BoolDistribution(0.0)
transition(p::TestPOMDP, s, a) = BoolDistribution(0.5)
observation(p::TestPOMDP, a, sp) = BoolDistribution(0.5)
states(p::TestPOMDP) = (true, false)
actions(p::TestPOMDP) = (true, false)
observations(p::TestPOMDP) = (true, false)
@testset "model" begin
m = TestPOMDP()
@test has_consistent_initial_distribution(m)
@test has_consistent_transition_distributions(m)
@test has_consistent_observation_distributions(m)
@test has_consistent_distributions(m)
end
@testset "old model" begin
probability_check(TestPOMDP())
end
@testset "support mismatch" begin
struct SupportMismatchPOMDP <: POMDP{Int, Int, Int} end
POMDPs.states(::SupportMismatchPOMDP) = 1:2
POMDPs.actions(::SupportMismatchPOMDP) = 1:2
POMDPs.observations(::SupportMismatchPOMDP) = 1:2
POMDPs.initialstate(::SupportMismatchPOMDP) = Deterministic(3)
POMDPs.transition(::SupportMismatchPOMDP, s, a) = SparseCat([1, 2, 3], [1.0, 0.0, 0.1])
POMDPs.observation(::SupportMismatchPOMDP, s, a, sp) = SparseCat([1, 2, 3], [1.0, 0.0, 0.1])
@test !has_consistent_transition_distributions(SupportMismatchPOMDP())
@test !has_consistent_observation_distributions(SupportMismatchPOMDP())
@test !has_consistent_distributions(SupportMismatchPOMDP())
end
| POMDPTesting | https://github.com/JuliaPOMDP/POMDPTesting.jl.git |
|
[
"MIT"
] | 0.2.6 | e5acfa4a9c84252491860939a2af7f4ffe501057 | docs | 203 | # ~~POMDPTesting.jl~~
POMDPTesting has been deprecated and its functionality has been moved to [POMDPTools](https://github.com/JuliaPOMDP/POMDPs.jl/tree/master/lib/POMDPTools). Please use that instead.
| POMDPTesting | https://github.com/JuliaPOMDP/POMDPTesting.jl.git |
|
[
"MIT"
] | 0.2.6 | e5acfa4a9c84252491860939a2af7f4ffe501057 | docs | 167 | # About
POMDPTesting is a collection of utilities for testing various models and solvers for [POMDPs.jl](https://github.com/JuliaPOMDP/POMDPs.jl).
```@contents
```
| POMDPTesting | https://github.com/JuliaPOMDP/POMDPTesting.jl.git |
|
[
"MIT"
] | 0.2.6 | e5acfa4a9c84252491860939a2af7f4ffe501057 | docs | 168 | # Model
```@docs
has_consistent_distributions
has_consistent_initial_distribution
has_consistent_transition_distributions
has_consistent_observation_distributions
```
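Each of these checks takes a model defined with [POMDPs.jl](https://github.com/JuliaPOMDP/POMDPs.jl) and returns a `Bool`. A minimal illustrative sketch is shown below; `MyPOMDP` is a hypothetical model type with discrete state and observation spaces standing in for your own model.
```julia
using POMDPTesting
m = MyPOMDP()                                # hypothetical POMDPs.jl model
has_consistent_transition_distributions(m)   # true if every transition distribution is a valid distribution over states(m)
has_consistent_observation_distributions(m)  # the same check for observation distributions over observations(m)
has_consistent_distributions(m)              # convenience check combining the individual tests
```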
| POMDPTesting | https://github.com/JuliaPOMDP/POMDPTesting.jl.git |
|
[
"MIT"
] | 0.2.6 | e5acfa4a9c84252491860939a2af7f4ffe501057 | docs | 35 | # Solver
```@docs
test_solver
```
| POMDPTesting | https://github.com/JuliaPOMDP/POMDPTesting.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | code | 662 | using Documenter, Distributions, InitialMassFunctions
# The `format` below enables "pretty" URLs when building on CI for a hosting service, and falls back to plain URLs for local builds so the generated pages are easier to browse from the filesystem.
makedocs(
sitename="InitialMassFunctions.jl",
modules = [InitialMassFunctions],
format = Documenter.HTML(;prettyurls = get(ENV, "CI", nothing) == "true"),
authors = "Chris Garling",
pages = ["index.md","types.md","utilities.md","docindex.md"],
doctest=true
)
deploydocs(;
repo = "github.com/cgarling/InitialMassFunctions.jl.git",
versions = ["stable" => "v^", "v#.#"],
push_preview=true,
)
| InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | code | 1089 | module InitialMassFunctions
import Distributions: ContinuousUnivariateDistribution, Pareto, LogNormal, truncated, Truncated, mean, median, var, skewness, kurtosis, pdf, logpdf, cdf, ccdf, minimum, maximum, partype, quantile, cquantile, sampler, rand, Sampleable, Univariate, Continuous, eltype, params
import Random: AbstractRNG
import SpecialFunctions: erf, erfinv
""" Abstract type for IMFs; a subtype of `Distributions.ContinuousUnivariateDistribution`, as all IMF models can be described as continuous, univariate PDFs. """
abstract type AbstractIMF <: ContinuousUnivariateDistribution end
include("powerlaw.jl")
include("lognormal.jl")
export PowerLawIMF, Salpeter1955, Kroupa2001, Chabrier2001BPL # power law constructors
export LogNormalIMF, Chabrier2003, Chabrier2003System, Chabrier2001LogNormal # lognormal constructors
export BrokenPowerLaw, LogNormalBPL, mean, median, var, skewness, kurtosis, pdf, logpdf, cdf, ccdf, partype, minimum, maximum, quantile, quantile!, cquantile
# export normalization, slope, logslope, dndm, dndlogm, pdf, logpdf, cdf, median
end # module
| InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | code | 19685 | """
LogNormalIMF(μ::Real, σ::Real, mmin::Real, mmax::Real)
Describes a lognormal IMF with probability distribution
```math
\\frac{dn(m)}{dm} = \\frac{A}{m} \\, \\exp \\left[ \\frac{ -\\left( \\log(m) - \\mu \\right)^2}{2\\sigma^2} \\right]
```
truncated such that the probability distribution is 0 below `mmin` and above `mmax`. `A` is a normalization constant such that the distribution integrates to 1 from the minimum valid stellar mass `mmin` to the maximum valid stellar mass `mmax`. This is simply `Distributions.truncated(Distributions.LogNormal(μ,σ);lower=mmin,upper=mmax)`. See the documentation for [`LogNormal`](https://juliastats.org/Distributions.jl/stable/univariate/#Distributions.LogNormal) and [`truncated`](https://juliastats.org/Distributions.jl/latest/truncate/#Distributions.truncated).
# Arguments
- `μ`; see [Distributions.LogNormal](https://juliastats.org/Distributions.jl/stable/univariate/#Distributions.LogNormal)
- `σ`; see [Distributions.LogNormal](https://juliastats.org/Distributions.jl/stable/univariate/#Distributions.LogNormal)
- `mmin`; the minimum stellar mass, below which the probability density is zero.
- `mmax`; the maximum stellar mass, above which the probability density is zero.
"""
LogNormalIMF(μ::Real, σ::Real, mmin::Real, mmax::Real) = truncated(LogNormal(μ,σ);lower=mmin,upper=mmax)
function mean(d::Truncated{LogNormal{T}, Continuous, T}) where T
mmin, mmax = extrema(d)
μ, σ = params( d.untruncated )
# return (α * θ^α / (1-α) / d.ucdf) * (mmax^(1-α) - mmin^(1-α))
return -exp(μ + σ^2/2) / 2 / (d.ucdf - d.lcdf) *
( erf( (μ + σ^2 - log(mmax)) / (sqrt(2)*σ) ) - erf( (μ + σ^2 - log(mmin)) / (sqrt(2)*σ) ) )
end
"""
Chabrier2001LogNormal(mmin::Real=0.08, mmax::Real=Inf)
Function to instantiate the [Chabrier 2001](https://ui.adsabs.harvard.edu/abs/2001ApJ...554.1274C/abstract) lognormal IMF for single stars. Returns an instance of `Distributions.Truncated(Distributions.LogNormal)`. See also [`Chabrier2003`](@ref) which has the same lognormal form for masses below one solar mass, but a power law extension at higher masses.
"""
Chabrier2001LogNormal(mmin::Real=0.08, mmax::Real=Inf) = LogNormalIMF(log(0.1), 0.627*log(10), mmin, mmax)
"""
lognormal_integral(μ, σ, b1, b2)
Definite integral of the lognormal probability distribution from `b1` to `b2`.
```math
\\int_{b1}^{b2} \\, \\frac{A}{x} \\, \\exp \\left[ \\frac{ -\\left( \\log(x) - \\mu \\right)^2}{2\\sigma^2} \\right] \\, dx
```
"""
lognormal_integral(A::T,μ::T,σ::T,b1::T,b2::T) where {T<:Number} =
A * sqrt(T(π)/2) * σ * (erf( (μ-log(b1))/(sqrt(T(2))*σ)) - erf( (μ-log(b2))/(sqrt(T(2))*σ)))
lognormal_integral(A::Number,μ::Number,σ::Number,b1::Number,b2::Number) = lognormal_integral(promote(A,μ,σ,b1,b2)...)
# lognormal_integral(A,μ,σ,b1,b2) = A * sqrt(π/2) * σ * (erf( (μ-log(b1))/(sqrt(2)*σ)) - erf( (μ-log(b2))/(sqrt(2)*σ)))
"""
LogNormalBPL(μ::Real,σ::Real,α::AbstractVector{<:Real},breakpoints::AbstractVector{<:Real})
LogNormalBPL(μ::Real,σ::Real,α::Tuple,breakpoints::Tuple)
A LogNormal distribution at low masses, with a broken power law extension at high masses. This uses the natural log base like [Distributions.LogNormal](https://juliastats.org/Distributions.jl/stable/univariate/#Distributions.LogNormal); if you have σ and μ in base 10, then multiply them both by `log(10)`. Must have `length(α) == length(breakpoints)-2`. The probability distribution for this IMF model is
```math
\\frac{dn(m)}{dm} = \\frac{A}{m} \\, \\exp \\left[ \\frac{ -\\left( \\log(m) - \\mu \\right)^2}{2\\sigma^2} \\right]
```
for `m < breakpoints[2]`, with a broken power law extension above this mass. See [`BrokenPowerLaw`](@ref) for interface details; the `α` and `breakpoints` are the same here as there.
# Arguments
- `μ`; see [Distributions.LogNormal](https://juliastats.org/Distributions.jl/stable/univariate/#Distributions.LogNormal)
- `σ`; see [Distributions.LogNormal](https://juliastats.org/Distributions.jl/stable/univariate/#Distributions.LogNormal)
- `α`; list of power law indices with `length(α) == length(breakpoints)-2`.
- `breakpoints`; list of masses that signal breaks in the IMF. MUST BE SORTED and bracketed with `breakpoints[1]` being the minimum valid mass and `breakpoints[end]` being the maximum valid mass.
# Examples
If you want a `LogNormalBPL` with a characteristic mass of 0.5 solar masses, log10 standard deviation of 0.6, and a single power law extension with slope `α=2.35` and a break at 1 solar mass, you would do `LogNormalBPL(log(0.5),0.6*log(10),[2.35],[0.08,1.0,Inf])`, where we set the minimum mass to `0.08` and maximum mass to `Inf`. If, instead, you know that `log10(m)=x`, where `m` is the characteristic mass of the `LogNormal` component, you would do `LogNormalBPL(x*log(10),0.6*log(10),[2.35],[0.08,1.0,Inf])`.
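For instance, a short illustrative sketch (the values are chosen purely for demonstration):
```julia
d = LogNormalBPL(log(0.5), 0.6*log(10), [2.35], [0.08, 1.0, Inf])
pdf(d, 1.0)        # probability density at 1 solar mass
quantile(d, 0.5)   # median mass
rand(d, 100)       # draw 100 random masses
```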
# Notes
There is some setup necessary for `quantile` and other derived methods, so it is more efficient to call these methods directly with an array via the call signature `quantile(d::LogNormalBPL{T}, x::AbstractArray{S})` rather than broadcasting over `x`. Note that the generic array method `quantile(d::Distributions.UnivariateDistribution, X::AbstractArray)` is deprecated in Distributions.jl; the array method used here is defined by this package specifically for `LogNormalBPL`.
# Methods
- `Base.convert(::Type{LogNormalBPL{T}}, d::LogNormalBPL)`
- `minimum(d::LogNormalBPL)`
- `maximum(d::LogNormalBPL)`
- `partype(d::LogNormalBPL)`
- `eltype(d::LogNormalBPL)`
- `mean(d::LogNormalBPL)`
- `median(d::LogNormalBPL)`
- `pdf(d::LogNormalBPL,x::Real)`
- `logpdf(d::LogNormalBPL,x::Real)`
- `cdf(d::LogNormalBPL,x::Real)`
- `ccdf(d::LogNormalBPL,x::Real)`
- `quantile(d::LogNormalBPL{S},x::T) where {S,T<:Real}`
- `quantile!(result::AbstractArray,d::LogNormalBPL{S},x::AbstractArray{T}) where {S,T<:Real}`
- `quantile(d::LogNormalBPL{T},x::AbstractArray{S})`
- `cquantile(d::LogNormalBPL{S},x::T) where {S,T<:Real}`
- `rand(rng::AbstractRNG, d::LogNormalBPL,s...)`
- Other methods from `Distributions.jl` should also work because `LogNormalBPL <: AbstractIMF <: Distributions.ContinuousUnivariateDistribution`. For example, `rand!(rng::AbstractRNG, d::LogNormalBPL, x::AbstractArray)`.
"""
struct LogNormalBPL{T} <: AbstractIMF
μ::T
σ::T
A::Vector{T} # normalization parameters
α::Vector{T} # power law indexes
breakpoints::Vector{T} # bounds of each break
LogNormalBPL{T}(μ::T, σ::T, A::Vector{T}, α::Vector{T}, breakpoints::Vector{T}) where {T} =
new{T}(μ, σ, A, α, breakpoints)
end
function LogNormalBPL(μ::T, σ::T, α::Vector{T}, breakpoints::Vector{T}) where T <: Real
@assert length(breakpoints) == length(α)+2
@assert breakpoints[1] > 0
nbreaks = length(α) + 1
A = Vector{T}(undef,nbreaks)
A[1] = one(T)
# solve for the prefactor for the first power law after the lognormal component
A[2] = breakpoints[2]^(α[1]-1) * exp( -(log(breakpoints[2])-μ)^2/(2*σ^2))
# if there is more than one power law component
if nbreaks > 2
for i in 3:nbreaks
A[i] = A[i-1]*breakpoints[i]^-α[i-2] / breakpoints[i]^-α[i-1]
end
end
# now A contains prefactors for each distribution component that makes them continuous
# with the first lognormal component having a prefactor of 1. Now we need to normalize
# the integral from minimum(breaks) to maximum(breaks) to equal 1 by dividing
# the entire A array by a common factor.
total_integral = zero(T)
total_integral += lognormal_integral(A[1], μ, σ, breakpoints[1], breakpoints[2])
for i in 2:nbreaks
total_integral += pl_integral(A[i], α[i-1], breakpoints[i], breakpoints[i+1])
end
A ./= total_integral
return LogNormalBPL{T}(μ, σ, A, α, breakpoints)
end
LogNormalBPL(μ::Real, σ::Real, α::Tuple, breakpoints::Tuple) =
LogNormalBPL(μ,σ,collect(promote(α...)),collect(promote(breakpoints...)))
LogNormalBPL(μ::T,σ::T,α::AbstractVector{T},breakpoints::AbstractVector{T}) where T<:Real =
LogNormalBPL(μ,σ,convert(Vector{T},α),convert(Vector{T},breakpoints))
function LogNormalBPL(μ::A, σ::B, α::AbstractVector{C}, breakpoints::AbstractVector{D}) where {A<:Real, B<:Real, C<:Real, D<:Real}
X = promote_type(A, B, C, D)
LogNormalBPL(convert(X,μ),convert(X,σ),convert(Vector{X},α), convert(Vector{X},breakpoints))
end
#### Conversions
Base.convert(::Type{LogNormalBPL{T}}, d::LogNormalBPL) where T <: Real =
LogNormalBPL{T}(convert(T,d.μ), convert(T,d.σ), convert(Vector{T},d.A), convert(Vector{T},d.α), convert(Vector{T},d.breakpoints))
Base.convert(::Type{LogNormalBPL{T}}, d::LogNormalBPL{T}) where T <: Real = d
#### Parameters
params(d::LogNormalBPL) = d.μ, d.σ, d.A, d.α, d.breakpoints
minimum(d::LogNormalBPL) = minimum(d.breakpoints)
maximum(d::LogNormalBPL) = maximum(d.breakpoints)
partype(d::LogNormalBPL{T}) where T = T
eltype(d::LogNormalBPL{T}) where T = T
#### Statistics
function mean(d::LogNormalBPL{T}) where T
μ,σ,A,α,breakpoints = params(d)
m = zero(T)
# m += A[1] * exp(μ + σ^2/2) * sqrt(π/2) * σ * (erf( (μ+σ^2-log(breakpoints[1])) / (sqrt(2)*σ) ) -
# erf( (μ+σ^2-log(breakpoints[2])) / (sqrt(2)*σ) ) )
m += A[1] * exp(μ + σ^2/2) * sqrt(T(π)/2) * σ * (erf( (μ+σ^2-log(breakpoints[1])) / (sqrt(T(2))*σ) ) -
erf( (μ+σ^2-log(breakpoints[2])) / (sqrt(T(2))*σ) ) )
m += sum( (A[i]*breakpoints[i+1]^(2-α[i-1])/(2-α[i-1]) -
A[i]*breakpoints[i]^(2-α[i-1])/(2-α[i-1]) for i in 2:length(A) ) )
return m
end
median(d::LogNormalBPL{T}) where T = quantile(d, T(0.5)) # this is temporary
# mode(d::BrokenPowerLaw) = d.breakpoints[argmin(d.α)] # this is not always correct
#### Evaluation
function pdf(d::LogNormalBPL, x::Real)
((x < minimum(d)) || (x > maximum(d))) && (return zero(partype(d)))
μ, σ, A, α, breakpoints = params(d)
idx = findfirst(>=(x), breakpoints)
((idx==1) || (idx==2)) ? (return A[1] / x * exp(-(log(x)-μ)^2/(2*σ^2))) : (return A[idx-1] * x^-α[idx-2])
end
function logpdf(d::LogNormalBPL, x::Real)
if ((x >= minimum(d)) && (x <= maximum(d)))
μ, σ, A, α, breakpoints = params(d)
idx = findfirst(>=(x), breakpoints)
((idx==1) || (idx==2)) ? (return log(A[1]) - log(x) - (log(x)-μ)^2/(2*σ^2)) : (return log(A[idx-1]) - α[idx-2]*log(x))
else
T = partype(d)
return -T(Inf)
end
end
function cdf(d::LogNormalBPL, x::Real)
if x <= minimum(d)
return zero(partype(d))
elseif x >= maximum(d)
return one(partype(d))
end
μ, σ, A, α, breakpoints = params(d)
idx = findfirst(>=(x), breakpoints)
result = lognormal_integral(A[1], μ, σ, breakpoints[1], min(x,breakpoints[2]))
idx > 2 && (result += sum(pl_integral(A[i],α[i-1],breakpoints[i],min(x,breakpoints[i+1])) for i in 2:idx-1))
return result
end
ccdf(d::LogNormalBPL, x::Real) = one(partype(d)) - cdf(d,x)
function quantile(d::LogNormalBPL{S}, x::T) where {S, T <: Real}
U = promote_type(S, T)
x <= zero(T) && (return U(minimum(d)))
x >= one(T) && (return U(maximum(d)))
μ, σ, A, α, breakpoints = params(d)
nbreaks = length(A)
# this works but the tuple interpolation is slow and allocating, so switch to a vector
# integrals = cumsum( (lognormal_integral(A[1],μ,σ,breakpoints[1],breakpoints[2]), (pl_integral(A[i],α[i-1],breakpoints[i],breakpoints[i+1]) for i in 2:nbreaks)...) ) # calculate the cumulative integral up to each breakpoint
integrals = Array{U}(undef,nbreaks)
integrals[1] = lognormal_integral(A[1],μ,σ,breakpoints[1],breakpoints[2])
for i in 2:nbreaks
integrals[i] = pl_integral(A[i],α[i-1],breakpoints[i],breakpoints[i+1])
end
cumsum!(integrals, integrals)
idx = findfirst(>=(x), integrals) # find the first breakpoint where the cumulative integral
if idx == 1
# return exp(μ - sqrt(2) * σ * erfinv( (A[1] * π * σ * erf((μ-log(breakpoints[1]))/(sqrt(2)*σ)) - sqrt(2π)*x) / (A[1]*π*σ) ))
return exp(μ - sqrt(U(2)) * σ * erfinv( (A[1] * π * σ * erf((μ-log(breakpoints[1]))/(sqrt(U(2))*σ)) - sqrt(U(2π))*x) / (A[1]*π*σ) ))
else
x -= integrals[idx-1] # If this is not the first breakpoint, then subtract off the cumulative integral and solve
a = one(S) - α[idx-1] # using power law CDF inversion
return (x*a/A[idx] + breakpoints[idx]^a)^inv(a)
end
end
function quantile!(result::AbstractArray{U}, d::LogNormalBPL{S}, x::AbstractArray{T}) where {S, T<:Real, U<:Real}
@assert axes(result) == axes(x)
μ, σ, A, α, breakpoints = params(d)
nbreaks = length(A)
# this works but the tuple interpolation is slow and allocating, so switch to a vector
# integrals = cumsum( (lognormal_integral(A[1],μ,σ,breakpoints[1],breakpoints[2]), (pl_integral(A[i],α[i-1],breakpoints[i],breakpoints[i+1]) for i in 2:nbreaks)...) ) # calculate the cumulative integral up to each breakpoint
integrals = Array{eltype(result)}(undef,nbreaks)
integrals[1] = lognormal_integral(A[1],μ,σ,breakpoints[1],breakpoints[2])
@inbounds for i in 2:nbreaks
integrals[i] = pl_integral(A[i],α[i-1],breakpoints[i],breakpoints[i+1])
end
cumsum!(integrals, integrals)
@inbounds for i in eachindex(x)
xi = x[i]
xi <= zero(T) && (result[i]=minimum(d); continue)
xi >= one(T) && (result[i]=maximum(d); continue)
idx = findfirst(>=(xi), integrals) # find the first breakpoint where the cumulative integral # up to each breakpoint
if idx == 1
# result[i] = exp(μ - sqrt(2) * σ * erfinv( (A[1] * π * σ * erf((μ-log(breakpoints[1]))/(sqrt(2)*σ)) - sqrt(2π)*xi) / (A[1]*π*σ) ))
result[i] = exp(μ - sqrt(U(2)) * σ * erfinv( (A[1] * π * σ * erf((μ-log(breakpoints[1]))/(sqrt(U(2))*σ)) - sqrt(U(2π))*xi) / (A[1]*π*σ) ))
else
xi -= integrals[idx-1] # If this is not the first breakpoint, then subtract off the cumulative integral and solve
a = one(S) - α[idx-1] # using power law CDF inversion
result[i] = (xi*a/A[idx] + breakpoints[idx]^a)^inv(a)
end
end
return result
end
quantile(d::LogNormalBPL{T}, x::AbstractArray{S}) where {T, S <: Real} =
quantile!(Array{promote_type(T,S)}(undef,size(x)), d, x)
cquantile(d::LogNormalBPL,x::Real) = quantile(d, 1-x)
#### Random sampling
struct LogNormalBPLSampler{T} <: Sampleable{Univariate,Continuous}
    μ::T # mean of log(m) for the lognormal component
    σ::T # standard deviation of log(m) for the lognormal component
A::Vector{T} # normalization parameters
α::Vector{T} # power law indexes
breakpoints::Vector{T} # bounds of each break
integrals::Vector{T} # cumulative integral up to each breakpoint
end
function LogNormalBPLSampler(d::LogNormalBPL)
μ, σ, A, α, breakpoints = params(d)
nbreaks = length(A)
# this works but the tuple interpolation is slow and allocating, so switch to a vector
# integrals = cumsum( (lognormal_integral(A[1],μ,σ,breakpoints[1],breakpoints[2]), (pl_integral(A[i],α[i-1],breakpoints[i],breakpoints[i+1]) for i in 2:nbreaks)...) ) # calculate the cumulative integral up to each breakpoint
integrals = Array{partype(d)}(undef,nbreaks)
integrals[1] = lognormal_integral(A[1],μ,σ,breakpoints[1],breakpoints[2])
@inbounds for i in 2:nbreaks
integrals[i] = pl_integral(A[i],α[i-1],breakpoints[i],breakpoints[i+1])
end
cumsum!(integrals, integrals)
LogNormalBPLSampler(μ, σ, A, α, breakpoints, integrals)
end
function rand(rng::AbstractRNG, s::LogNormalBPLSampler{T}) where T
x = rand(rng, T)
μ, σ, A, α, breakpoints, integrals = s.μ, s.σ, s.A, s.α, s.breakpoints, s.integrals
idx = findfirst(>=(x), integrals) # find the first breakpoint where the cumulative integral # up to each breakpoint
if idx == 1
# return exp(μ - sqrt(2) * σ * erfinv( (A[1] * π * σ * erf((μ-log(breakpoints[1]))/(sqrt(2)*σ)) - sqrt(2π)*x) / (A[1]*π*σ) ))
return exp(μ - sqrt(T(2)) * σ * erfinv( (A[1] * π * σ * erf((μ-log(breakpoints[1]))/(sqrt(T(2))*σ)) - sqrt(T(2π))*x) / (A[1]*π*σ) ))
else
x -= integrals[idx-1] # If this is not the first breakpoint, then subtract off the cumulative integral and solve
a = one(T) - α[idx-1] # using power law CDF inversion
return (x*a/A[idx] + breakpoints[idx]^a)^inv(a)
end
end
sampler(d::LogNormalBPL) = LogNormalBPLSampler(d)
rand(rng::AbstractRNG, d::LogNormalBPL) = rand(rng, sampler(d))
#######################################################
# Specific types of LogNormalBPL
#######################################################
const chabrier2003_α = [2.3]
const chabrier2003_breakpoints = [0.0,1.0,Inf]
const chabrier2003_μ = log(0.079)#*log(10)
const chabrier2003_σ = 0.69*log(10)
"""
Chabrier2003(mmin::Real=0.08, mmax::Real=Inf)
Function to instantiate the [Chabrier 2003](https://ui.adsabs.harvard.edu/abs/2003PASP..115..763C/abstract) IMF for single stars. This is a lognormal IMF with a power-law extension for masses greater than one solar mass. This IMF is valid for single stars and takes parameters from the "Disk and Young Clusters" column of Table 2 in the above paper. This will return an instance of [`LogNormalBPL`](@ref). See also [`Chabrier2003System`](@ref) which implements the IMF for general stellar systems with multiplicity from this same paper, and [`Chabrier2001LogNormal`](@ref) which has the same lognormal form as this model but without a high-mass power law extension.
"""
function Chabrier2003(mmin::T=0.08, mmax::T=Inf) where T <: Real
@assert mmin > 0
    mmin > one(T) && return PowerLawIMF(2.3,mmin,mmax) # if mmin>1, we are ONLY using the power law extension, so return power law IMF.
mmax < one(T) && return truncated(LogNormal(chabrier2003_μ,chabrier2003_σ);lower=mmin,upper=mmax) # if mmax<1, we are ONLY using the lognormal component, so return lognormal IMF.
idx1 = findfirst(>(mmin), chabrier2003_breakpoints)-1
idx2 = findfirst(>=(mmax), chabrier2003_breakpoints)
bp = convert(Vector{T}, chabrier2003_breakpoints[idx1:idx2])
bp[1] = mmin
bp[end] = mmax
LogNormalBPL(convert(T,chabrier2003_μ), convert(T,chabrier2003_σ), convert(Vector{T},chabrier2003_α[idx1:(idx2-2)]), bp)
end
Chabrier2003(mmin::Real, mmax::Real) = Chabrier2003(promote(mmin,mmax)...)
#######################################################
const chabrier2003_system_μ = log(0.22)
const chabrier2003_system_σ = 0.57*log(10)
"""
Chabrier2003System(mmin::Real=0.08, mmax::Real=Inf)
Function to instantiate the [Chabrier 2003](https://ui.adsabs.harvard.edu/abs/2003PASP..115..763C/abstract) system IMF. This is a lognormal IMF with a power-law extension for masses greater than one solar mass. This IMF is valid for general star systems with stellar multiplicity (e.g., binaries) and differs from the typical single-star models. Parameters for this distribution are taken from Equation 18 in the above paper. This will return an instance of [`LogNormalBPL`](@ref). See also [`Chabrier2003`](@ref) for the single star IMF.
"""
function Chabrier2003System(mmin::T=0.08, mmax::T=Inf) where T <: Real
@assert mmin > 0
    mmin > one(T) && return PowerLawIMF(2.3,mmin,mmax) # if mmin>1, we are ONLY using the power law extension, so return power law IMF.
mmax < one(T) && return truncated(LogNormal(chabrier2003_system_μ,chabrier2003_system_σ);lower=mmin,upper=mmax) # if mmax<1, we are ONLY using the lognormal component, so return lognormal IMF.
idx1 = findfirst(>(mmin), chabrier2003_breakpoints)-1
idx2 = findfirst(>=(mmax), chabrier2003_breakpoints)
bp = convert(Vector{T}, chabrier2003_breakpoints[idx1:idx2])
bp[1] = mmin
bp[end] = mmax
LogNormalBPL(convert(T,chabrier2003_system_μ), convert(T,chabrier2003_system_σ), convert(Vector{T},chabrier2003_α[idx1:(idx2-2)]), bp)
end
Chabrier2003System(mmin::Real, mmax::Real) = Chabrier2003System(promote(mmin,mmax)...)
| InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | code | 15379 | ###########################################################################################
# Power Law
###########################################################################################
"""
PowerLawIMF(α::Real, mmin::Real, mmax::Real)
Describes a single power-law IMF with probability distribution
```math
\\frac{dn(m)}{dm} = A \\times m^{-\\alpha}
```
truncated such that the probability distribution is 0 below `mmin` and above `mmax`. `A` is a normalization constant such that the distribution integrates to 1 from the minimum valid stellar mass `mmin` to the maximum valid stellar mass `mmax`. This is simply `Distributions.truncated(Distributions.Pareto(α-1,mmin);upper=mmax)`. See the documentation for [`Pareto`](https://juliastats.org/Distributions.jl/latest/univariate/#Distributions.Pareto) and [`truncated`](https://juliastats.org/Distributions.jl/latest/truncate/#Distributions.truncated).
"""
PowerLawIMF(α::Real, mmin::Real, mmax::Real) = truncated(Pareto(α-1, mmin); upper=mmax)
function mean(d::Truncated{Pareto{T}, Continuous, T}) where T
mmin, mmax = extrema(d)
α, θ = params( d.untruncated )
return (α * θ^α / (1-α) / d.ucdf) * (mmax^(1-α) - mmin^(1-α))
end
"""
Salpeter1955(mmin::Real=0.4, mmax::Real=Inf)
The IMF model of [Salpeter 1955](https://ui.adsabs.harvard.edu/abs/1955ApJ...121..161S/abstract), a [`PowerLawIMF`](@ref) with `α=2.35`.
"""
Salpeter1955(mmin::T, mmax::T) where T <: Real = PowerLawIMF(T(2.35), mmin, mmax)
Salpeter1955(mmin::Real=0.4, mmax::Real=Inf) = Salpeter1955(promote(mmin, mmax)...)
###########################################################################################
# Broken Power Law
###########################################################################################
"""
pl_integral(A,α,b1,b2)
Definite integral of power law ``A*x^{-α}`` from b1 (lower) to b2 (upper).
```math
\\int_{b1}^{b2} \\, A \\times x^{-\\alpha} \\, dx = \\frac{A}{1-\\alpha} \\times \\left( b2^{1-\\alpha} - b1^{1-\\alpha} \\right)
```
"""
pl_integral(A, α, b1, b2) = A/(1-α) * (b2^(1-α) - b1^(1-α))
"""
BrokenPowerLaw(α::AbstractVector{T}, breakpoints::AbstractVector{S}) where {T<:Real,S<:Real}
BrokenPowerLaw(α::Tuple, breakpoints::Tuple)
BrokenPowerLaw{T}(A::Vector{T}, α::Vector{T}, breakpoints::Vector{T}) where {T}
An `AbstractIMF <: Distributions.ContinuousUnivariateDistribution` that describes a broken power-law IMF with probability distribution
```math
\\frac{dn(m)}{dm} = A \\times m^{-\\alpha}
```
that is defined piecewise with different normalizations `A` and power law slopes `α` in different mass ranges. The normalization constants `A` will be calculated automatically after you provide the power law slopes and break points.
# Arguments
- `α`; the power-law slopes of the different segments of the broken power law.
- `breakpoints`; the masses at which the power law slopes change. If `length(α)=n`, then `length(breakpoints)=n+1`.
# Examples
`BrokenPowerLaw([1.35,2.35],[0.08,1.0,Inf])` will instantiate a broken power law defined from a minimum mass of `0.08` to a maximum mass of `Inf` with a single switch in `α` at `m=1.0`. From `0.08 ≤ m ≤ 1.0`, `α = 1.35` and from `1.0 ≤ m ≤ Inf`, `α = 2.35`.
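For instance, a short illustrative sketch (the slopes and break points are chosen purely for demonstration):
```julia
d = BrokenPowerLaw([1.35, 2.35], [0.08, 1.0, Inf])
pdf(d, 0.5)                   # probability density at 0.5 solar masses
quantile(d, [0.1, 0.5, 0.9])  # several quantiles at once; see the Notes below
rand(d, 100)                  # draw 100 random masses
```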
# Notes
There is some setup necessary for `quantile` and other derived methods, so it is more efficient to call these methods directly with an array via the call signature `quantile(d::BrokenPowerLaw{T}, x::AbstractArray{S})` rather than broadcasting over `x`. Note that the generic array method `quantile(d::Distributions.UnivariateDistribution, X::AbstractArray)` is deprecated in Distributions.jl; the array method used here is defined by this package specifically for `BrokenPowerLaw`.
# Methods
- `Base.convert(::Type{BrokenPowerLaw{T}}, d::BrokenPowerLaw)`
- `minimum(d::BrokenPowerLaw)`
- `maximum(d::BrokenPowerLaw)`
- `partype(d::BrokenPowerLaw)`
- `eltype(d::BrokenPowerLaw)`
- `mean(d::BrokenPowerLaw)`
- `median(d::BrokenPowerLaw)`
- `var(d::BrokenPowerLaw)`, may not function correctly for large `mmax`
- `skewness(d::BrokenPowerLaw)`, may not function correctly for large `mmax`
- `kurtosis(d::BrokenPowerLaw)`, may not function correctly for large `mmax`
- `pdf(d::BrokenPowerLaw,x::Real)`
- `logpdf(d::BrokenPowerLaw,x::Real)`
- `cdf(d::BrokenPowerLaw,x::Real)`
- `ccdf(d::BrokenPowerLaw,x::Real)`
- `quantile(d::BrokenPowerLaw{S},x::T) where {S,T<:Real}`
- `quantile!(result::AbstractArray,d::BrokenPowerLaw{S},x::AbstractArray{T}) where {S,T<:Real}`
- `quantile(d::BrokenPowerLaw{T},x::AbstractArray{S})`
- `cquantile(d::BrokenPowerLaw{S},x::T) where {S,T<:Real}`
- `rand(rng::AbstractRNG, d::BrokenPowerLaw,s...)`
- Other methods from `Distributions.jl` should also work because `BrokenPowerLaw <: AbstractIMF <: Distributions.ContinuousUnivariateDistribution`. For example, `rand!(rng::AbstractRNG, d::BrokenPowerLaw, x::AbstractArray)`.
"""
struct BrokenPowerLaw{T} <: AbstractIMF
A::Vector{T} # normalization parameters
α::Vector{T} # power law indexes
breakpoints::Vector{T} # bounds of each break
BrokenPowerLaw{T}(A::Vector{T}, α::Vector{T}, breakpoints::Vector{T}) where {T} =
new{T}(A,α,breakpoints)
end
function BrokenPowerLaw(α::Vector{T}, breakpoints::Vector{T}) where T <: Real
@assert length(breakpoints) == length(α)+1
@assert breakpoints[1] > 0
nbreaks = length(α)
A = Vector{T}(undef, nbreaks)
A[1] = one(T)
for i in 2:nbreaks
A[i] = A[i-1] * breakpoints[i]^-α[i-1] / breakpoints[i]^-α[i]
end
# Now A contains prefactors for each power law that makes them continuous
# with the first power law having a prefactor of 1. Now we need to normalize
# the integral from minimum(breaks) to maximum(breaks) to equal 1 by dividing
# the entire A array by a common factor.
total_integral = zero(T)
for i in 1:nbreaks
total_integral += pl_integral(A[i], α[i], breakpoints[i], breakpoints[i+1])
end
A ./= total_integral
return BrokenPowerLaw{T}(A, α, breakpoints)
end
BrokenPowerLaw(α::Tuple, breakpoints::Tuple) = BrokenPowerLaw(collect(promote(α...)), collect(promote(breakpoints...)))
BrokenPowerLaw(α::AbstractVector{T}, breakpoints::AbstractVector{T}) where T <: Real =
BrokenPowerLaw(convert(Vector{T}, α), convert(Vector{T}, breakpoints))
function BrokenPowerLaw(α::AbstractVector{T}, breakpoints::AbstractVector{S}) where {T <: Real, S <: Real}
X = promote_type(T, S)
return BrokenPowerLaw(convert(Vector{X},α), convert(Vector{X},breakpoints))
end
#### Conversions
Base.convert(::Type{BrokenPowerLaw{T}}, d::BrokenPowerLaw) where T =
BrokenPowerLaw{T}(convert(Vector{T},d.A), convert(Vector{T},d.α), convert(Vector{T},d.breakpoints))
Base.convert(::Type{BrokenPowerLaw{T}}, d::BrokenPowerLaw{T}) where T <: Real = d
#### Parameters
params(d::BrokenPowerLaw) = d.A, d.α, d.breakpoints
minimum(d::BrokenPowerLaw) = minimum(d.breakpoints)
maximum(d::BrokenPowerLaw) = maximum(d.breakpoints)
partype(d::BrokenPowerLaw{T}) where T = T
eltype(d::BrokenPowerLaw{T}) where T = T
#### Statistics
function mean(d::BrokenPowerLaw)
A, α, breakpoints = params(d)
return sum( (A[i]*breakpoints[i+1]^(2-α[i])/(2-α[i]) -
A[i]*breakpoints[i]^(2-α[i])/(2-α[i]) for i in 1:length(A) ) )
end
median(d::BrokenPowerLaw{T}) where T = quantile(d, T(0.5)) # this is temporary
# mode(d::BrokenPowerLaw) = d.breakpoints[argmin(d.α)] # this is not always correct
function var(d::BrokenPowerLaw)
A, α, breakpoints = params(d)
return sum( (A[i]*breakpoints[i+1]^(3-α[i])/(3-α[i]) -
A[i]*breakpoints[i]^(3-α[i])/(3-α[i]) for i in 1:length(A) ) ) - mean(d)^2
end
function skewness(d::BrokenPowerLaw)
A, α, breakpoints = params(d)
m = mean(d)
v = var(d)
return ( sum( (A[i]*breakpoints[i+1]^(4-α[i])/(4-α[i]) -
A[i]*breakpoints[i]^(4-α[i])/(4-α[i]) for i in 1:length(A) ) )
- 3 * m * v - m^3) / v^(3/2)
end
function kurtosis(d::BrokenPowerLaw)
A, α, breakpoints = params(d)
X4 = sum( (A[i]*breakpoints[i+1]^(5-α[i])/(5-α[i]) -
A[i]*breakpoints[i]^(5-α[i])/(5-α[i]) for i in 1:length(A) ) )
X3 = sum( (A[i]*breakpoints[i+1]^(4-α[i])/(4-α[i]) -
A[i]*breakpoints[i]^(4-α[i])/(4-α[i]) for i in 1:length(A) ) )
X2 = sum( (A[i]*breakpoints[i+1]^(3-α[i])/(3-α[i]) -
A[i]*breakpoints[i]^(3-α[i])/(3-α[i]) for i in 1:length(A) ) )
X = sum( (A[i]*breakpoints[i+1]^(2-α[i])/(2-α[i]) -
A[i]*breakpoints[i]^(2-α[i])/(2-α[i]) for i in 1:length(A) ) )
μ4 = X4 - 4*X*X3 + 6*X^2*X2 - 3*X^4
return μ4 / var(d)^2
end
#### Evaluation
function pdf(d::BrokenPowerLaw{S}, x::T) where {S, T <: Real}
((x < minimum(d)) || (x > maximum(d))) && (return zero(promote_type(S,T)))
idx = findfirst(>=(x), d.breakpoints)
idx != 1 && (idx-=1)
return d.A[idx] * x^-d.α[idx]
end
function logpdf(d::BrokenPowerLaw{S}, x::T) where {S, T <: Real}
if ((x >= minimum(d)) && (x <= maximum(d)))
A, α, breakpoints = params(d)
idx = findfirst(>=(x), breakpoints)
idx != 1 && (idx-=1)
return log(A[idx]) - α[idx]*log(x)
else
U = promote_type(S, T)
return -U(Inf)
end
end
function cdf(d::BrokenPowerLaw{S}, x::T) where {S, T <: Real}
U = promote_type(S, T)
if x <= minimum(d)
return zero(U)
elseif x >= maximum(d)
return one(U)
end
A,α,breakpoints = params(d)
idx = findfirst(>=(x),breakpoints)
idx != 1 && (idx-=1)
return sum(pl_integral(A[i],α[i],breakpoints[i],min(x,breakpoints[i+1])) for i in 1:idx)
end
ccdf(d::BrokenPowerLaw, x::Real) = one(partype(d)) - cdf(d,x)
function quantile(d::BrokenPowerLaw{S}, x::T) where {S, T <: Real}
U = promote_type(S, T)
if x <= zero(T)
return U(minimum(d))
elseif x >= one(T)
return U(maximum(d))
end
A, α, breakpoints = params(d)
nbreaks = length(A)
integrals = cumsum(pl_integral(A[i],α[i],breakpoints[i],breakpoints[i+1]) for i in 1:nbreaks) # calculate the cumulative integral
idx = findfirst(>=(x), integrals) # find the first breakpoint where the cumulative integral # up to each breakpoint
idx != 1 && (x-=integrals[idx-1]) # is greater than x. If this is not the first breakpoint, then subtract off the cumulative integral
a = one(S) - α[idx]
return (x*a/A[idx] + breakpoints[idx]^a)^inv(a)
end
function quantile!(result::AbstractArray, d::BrokenPowerLaw{S}, x::AbstractArray{T}) where {S, T <: Real}
@assert axes(result) == axes(x)
A, α, breakpoints = params(d)
nbreaks = length(A)
integrals = cumsum(pl_integral(A[i],α[i],breakpoints[i],breakpoints[i+1]) for i in 1:nbreaks) # calculate the cumulative integral
@inbounds for i in eachindex(x)
xi = x[i]
xi<=zero(T) && (result[i]=minimum(d); continue)
xi>=one(T) && (result[i]=maximum(d); continue)
idx = findfirst(>=(xi), integrals) # find the first breakpoint where the cumulative integral # up to each breakpoint
idx != 1 && (xi-=integrals[idx-1]) # is greater than x. If this is not the first breakpoint, then subtract off
a = one(S) - α[idx] # the cumulative integral
result[i] = (xi*a/A[idx] + breakpoints[idx]^a)^inv(a)
end
return result
end
quantile(d::BrokenPowerLaw{T}, x::AbstractArray{S}) where {T, S <: Real} =
quantile!(Array{promote_type(T,S)}(undef,size(x)),d,x)
cquantile(d::BrokenPowerLaw, x::Real) = quantile(d, one(x)-x)
##### Random sampling
##########################################################################################
# Implementing efficient sampler. We need the cumulative integral up to each breakpoint in
# order to transform a uniform random point in (0,1] to a random mass, but we dont want to
# recompute it every time. We could just use the method of
# quantile(d::BrokenPowerLaw, x::AbstractArray)
# for this purpose but the "correct" way to do it is to implement a `sampler` method as below.
# By default (e.g., without `rand(rng::AbstractRNG, d::BrokenPowerLaw)`),
# Distributions seems to call the efficient
# `quantile(d::BrokenPowerLaw, x::AbstractArray)` method anyway, but it doesn't respect
# the type of `d`, so a bit better to do it this way anyway.
struct BPLSampler{T} <: Sampleable{Univariate, Continuous}
A::Vector{T} # normalization parameters
α::Vector{T} # power law indexes
breakpoints::Vector{T} # bounds of each break
integrals::Vector{T} # cumulative integral up to each breakpoint
end
function BPLSampler(d::BrokenPowerLaw)
A, α, breakpoints = params(d)
nbreaks = length(A)
integrals = cumsum(pl_integral(A[i],α[i],breakpoints[i],breakpoints[i+1]) for i in 1:nbreaks) # calculate the cumulative integral
return BPLSampler(A, α, breakpoints, integrals)
end
function rand(rng::AbstractRNG, s::BPLSampler{T}) where T
x = rand(rng, T)
A, α, breakpoints,integrals = s.A,s.α,s.breakpoints,s.integrals
idx = findfirst(>=(x), integrals) # find the first breakpoint where the cumulative integral # up to each breakpoint
idx != 1 && (x-=integrals[idx-1]) # is greater than x. If this is not the first breakpoint, then subtract off the cumulative integral
a = one(T) - α[idx]
return (x*a/A[idx] + breakpoints[idx]^a)^inv(a)
end
sampler(d::BrokenPowerLaw) = BPLSampler(d)
rand(rng::AbstractRNG, d::BrokenPowerLaw) = rand(rng, sampler(d))
#######################################################
# Specific types of BrokenPowerLaw
#######################################################
const kroupa2001_α = [0.3, 1.3, 2.3]
const kroupa2001_breakpoints = [0.0, 0.08, 0.50, Inf]
"""
Kroupa2001(mmin::Real=0.08, mmax::Real=Inf)
Function to instantiate a [`BrokenPowerLaw`](@ref) IMF for single stars with the parameters from Equation 2 of [Kroupa 2001](https://ui.adsabs.harvard.edu/abs/2001MNRAS.322..231K/abstract). This is equivalent to the relation given in [Kroupa 2002](https://ui.adsabs.harvard.edu/abs/2002Sci...295...82K/abstract).
"""
function Kroupa2001(mmin::T=0.08, mmax::T=Inf) where T <: Real
@assert mmin > zero(T)
# idx1 = max(1, findfirst(>(mmin),kroupa2001_breakpoints)-1)
idx1 = findfirst(>(mmin), kroupa2001_breakpoints) - 1
idx2 = findfirst(>=(mmax), kroupa2001_breakpoints)
bp = convert(Vector{T}, kroupa2001_breakpoints[idx1:idx2])
bp[1] = mmin
bp[end] = mmax
BrokenPowerLaw(convert(Vector{T}, kroupa2001_α[idx1:idx2-1]), bp)
end
Kroupa2001(mmin::Real, mmax::Real) = Kroupa2001(promote(mmin,mmax)...)
const chabrier2001bpl_α = [1.55, 2.70]
const chabrier2001bpl_breakpoints = [0.00, 1.0, Inf]
"""
Chabrier2001BPL(mmin::T=0.08, mmax::T=Inf)
Function to instantiate a [`BrokenPowerLaw`](@ref) IMF for single stars with the parameters from the first column of Table 1 in [Chabrier 2001](https://ui.adsabs.harvard.edu/abs/2001ApJ...554.1274C/abstract).
"""
function Chabrier2001BPL(mmin::T=0.08, mmax::T=Inf) where {T<:Real}
@assert mmin > 0
# idx1 = max(1, findfirst(>(mmin),kroupa2001_breakpoints)-1)
idx1 = findfirst(>(mmin), chabrier2001bpl_breakpoints) - 1
idx2 = findfirst(>=(mmax), chabrier2001bpl_breakpoints)
bp = convert(Vector{T}, chabrier2001bpl_breakpoints[idx1:idx2])
bp[1] = mmin
bp[end] = mmax
BrokenPowerLaw(convert(Vector{T}, chabrier2001bpl_α[idx1:idx2-1]), bp)
end
Chabrier2001BPL(mmin::Real, mmax::Real) = Chabrier2001BPL(promote(mmin,mmax)...)
| InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | code | 11594 | using InitialMassFunctions
using QuadGK
using Test
function test_bpl(d::BrokenPowerLaw)
mmin,mmax = extrema(d)
integral = quadgk(x->pdf(d,x),mmin,mmax) # test that the pdf is properly normalized
# integral = quadgk(x->exp(x)*pdf(d,exp(x)),log(mmin),log(mmax))
@test integral[1] ≈ oneunit(integral[1]) rtol=1e-12 atol=integral[2] # test normalization
meanmass_gk = quadgk(x->x*pdf(d,x),mmin,mmax)
# meanmass_gk = quadgk(x->exp(x)^2*pdf(d,exp(x)),log(mmin),log(mmax))
@test meanmass_gk[1] ≈ mean(d) rtol=1e-12 atol=meanmass_gk[2] # test mean
var_gk = quadgk(x->x^2*pdf(d,x),mmin,mmax)
@test var_gk[1]-mean(d)^2 ≈ var(d) rtol=1e-12 atol=var_gk[2] # test variance
skew_gk = quadgk(x->x^3*pdf(d,x),mmin,mmax)
@test (skew_gk[1]-3*mean(d)*var(d)-mean(d)^3)/var(d)^(3/2) ≈ skewness(d) rtol=1e-5 atol=var_gk[2] # test skewness; higher error
dmean = mean(d)
var_d = var(d)
kurt_gk = quadgk(x->(x-dmean)^4 / var_d^2 * pdf(d,x),mmin,mmax)
@test kurt_gk[1] ≈ kurtosis(d) rtol=1e-10 atol=kurt_gk[2] # test kurtosis
# test correctness of CDF, quantile, etc.
test_points = [0.1,0.2,0.3,1.0,1.5,10.0,100.0]
test_points = test_points[mmin .< test_points .< mmax]
for i in test_points
x = cdf(d,i)
integral = quadgk(x->pdf(d,x),mmin,i)
@test integral[1] ≈ x rtol=1e-12 atol=integral[2]
@test quantile(d,x) ≈ i rtol=1e-12
@test cquantile(d,x) ≈ quantile(d,1-x) rtol=1e-12
@test logpdf(d,i) ≈ log(pdf(d,i)) rtol=1e-12
@test x ≈ 1 - ccdf(d,i) rtol=1e-12
end
end
function test_lognormalbpl(d::LogNormalBPL)
mmin,mmax = extrema(d)
integral = quadgk(x->pdf(d,x),mmin,mmax)
# integral = quadgk(x->exp(x)*pdf(d,exp(x)),log(mmin),log(mmax))
@test integral[1] ≈ oneunit(integral[1]) rtol=1e-12 atol=integral[2] # test normalization
meanmass_gk = quadgk(x->x*pdf(d,x),mmin,mmax)
# meanmass_gk = quadgk(x->exp(x)^2*pdf(d,exp(x)),log(mmin),log(mmax))
@test meanmass_gk[1] ≈ mean(d) rtol=1e-12 atol=meanmass_gk[2] # test mean
# these are not defined yet for LogNormalBPL
# var_gk = quadgk(x->x^2*pdf(d,x),mmin,mmax)
# @test var_gk[1]-mean(d)^2 ≈ var(d) rtol=1e-12 atol=var_gk[2] # test variance
# skew_gk = quadgk(x->x^3*pdf(d,x),mmin,mmax)
# @test (skew_gk[1]-3*mean(d)*var(d)-mean(d)^3)/var(d)^(3/2) ≈ skewness(d) rtol=1e-5 atol=var_gk[2] # test skewness; higher error
# dmean = mean(d)
# var_d = var(d)
# kurt_gk = quadgk(x->(x-dmean)^4 / var_d^2 * pdf(d,x),mmin,mmax)
# @test kurt_gk[1] ≈ kurtosis(d) rtol=1e-10 atol=kurt_gk[2] # test kurtosis
# test correctness of CDF, quantile, etc.
test_points = [0.1,0.2,0.3,1.0,1.5,10.0,100.0]
test_points = test_points[mmin .< test_points .< mmax]
for i in test_points
x = cdf(d,i)
integral = quadgk(x->pdf(d,x),mmin,i)
@test integral[1] ≈ x rtol=1e-12 atol=integral[2]
@test quantile(d,x) ≈ i rtol=1e-12
@test cquantile(d,x) ≈ quantile(d,1-x) rtol=1e-12
@test logpdf(d,i) ≈ log(pdf(d,i)) rtol=1e-12
@test x ≈ 1 - ccdf(d,i) rtol=1e-12
end
end
# function test_type(T::Type,d::BrokenPowerLaw)
# @test d isa BrokenPowerLaw{T}
# @test partype(d) == T
# @test convert(BrokenPowerLaw{Float32},d) isa BrokenPowerLaw{Float32}
# @test convert(BrokenPowerLaw{T},d) === d
# end
# include("single_powerlaw.jl")
@testset "BrokenPowerLaw" begin
@testset "Float64" begin
d = BrokenPowerLaw([1.3,2.35],[0.08,1.0,100.0])
@test d isa BrokenPowerLaw{Float64}
@test partype(d) == Float64
@test convert(BrokenPowerLaw{Float32},d) isa BrokenPowerLaw{Float32}
@test convert(BrokenPowerLaw{Float64},d) === d
@test pdf(d,1.0) isa Float64
@test pdf(d,1.0f0) isa Float64
@test pdf(d, -1.0) === 0.0 # Test out of range inputs
@test pdf(d, -1.0f0) === 0.0 # Test out of range inputs
@test pdf(d, 1e3) === 0.0 # Test out of range inputs
@test pdf(d, 1f3) === 0.0 # Test out of range inputs
@test logpdf(d,1.0) isa Float64
@test logpdf(d,1.0f0) isa Float64
@test logpdf(d, -1.0) === -Inf # Test out of range inputs
@test logpdf(d, -1.0f0) === -Inf # Test out of range inputs
@test logpdf(d, 1e3) === -Inf # Test out of range inputs
@test logpdf(d, 1f3) === -Inf # Test out of range inputs
@test cdf(d,1.0) isa Float64
@test cdf(d,1.0f0) isa Float64
@test cdf(d, -1.0) === 0.0 # Test out of range inputs
@test cdf(d, -1.0f0) === 0.0 # Test out of range inputs
@test cdf(d, 1e3) === 1.0 # Test out of range inputs
@test cdf(d, 1f3) === 1.0 # Test out of range inputs
@test ccdf(d,1.0) isa Float64
@test ccdf(d,1.0f0) isa Float64
@test ccdf(d, -1.0) === 1.0 # Test out of range inputs
@test ccdf(d, -1.0f0) === 1.0 # Test out of range inputs
@test ccdf(d, 1e3) === 0.0 # Test out of range inputs
@test ccdf(d, 1f3) === 0.0 # Test out of range inputs
@test quantile(d,0.5) isa Float64
@test quantile(d,0.5f0) isa Float64
@test quantile(d,1.2) === Float64(maximum(d)) # Test out of range inputs
@test quantile(d,1.2f0) === maximum(d) # Test out of range inputs
@test quantile(d,-1.0) === Float64(minimum(d)) # Test out of range inputs
@test quantile(d,-1.0f0) === minimum(d) # Test out of range inputs
@test quantile(d,[0.5f0,0.75f0]) isa Vector{Float64}
@test quantile(d,[0.5,0.75]) isa Vector{Float64}
@test cquantile(d,0.5) isa Float64
@test cquantile(d,0.5f0) isa Float64
@test rand(d) isa Float64
@test mean(d) isa Float64
@test median(d) isa Float64
end
@testset "Float32" begin
d = BrokenPowerLaw([1.3f0,2.35f0],[0.08f0,1.0f0,100.0f0])
@test d isa BrokenPowerLaw{Float32}
@test partype(d) == Float32
@test convert(BrokenPowerLaw{Float32},d) === d
@test convert(BrokenPowerLaw{Float64},d) isa BrokenPowerLaw{Float64}
@test pdf(d,1.0) isa Float64
@test pdf(d,1.0f0) isa Float32
@test pdf(d, -1.0) === 0.0 # Test out of range inputs
@test pdf(d, -1.0f0) === 0.0f0 # Test out of range inputs
@test pdf(d, 1e3) === 0.0 # Test out of range inputs
@test pdf(d, 1f3) === 0.0f0 # Test out of range inputs
@test logpdf(d,1.0) isa Float64
@test logpdf(d,1.0f0) isa Float32
@test logpdf(d, -1.0) === -Inf # Test out of range inputs
@test logpdf(d, -1.0f0) === -Inf32 # Test out of range inputs
@test logpdf(d, 1e3) === -Inf # Test out of range inputs
@test logpdf(d, 1f3) === -Inf32 # Test out of range inputs
@test cdf(d,1.0) isa Float64
@test cdf(d,1.0f0) isa Float32
@test cdf(d, -1.0) === 0.0 # Test out of range inputs
@test cdf(d, -1.0f0) === 0.0f0 # Test out of range inputs
@test cdf(d, 1e3) === 1.0 # Test out of range inputs
@test cdf(d, 1f3) === 1.0f0 # Test out of range inputs
@test ccdf(d,1.0) isa Float64
@test ccdf(d,1.0f0) isa Float32
@test ccdf(d, -1.0) === 1.0 # Test out of range inputs
@test ccdf(d, -1.0f0) === 1.0f0 # Test out of range inputs
@test ccdf(d, 1e3) === 0.0 # Test out of range inputs
@test ccdf(d, 1f3) === 0.0f0 # Test out of range inputs
@test quantile(d,0.5) isa Float64
@test quantile(d,0.5f0) isa Float32
@test quantile(d,1.2) === Float64(maximum(d)) # Test out of range inputs
@test quantile(d,1.2f0) === maximum(d) # Test out of range inputs
@test quantile(d,-1.0) === Float64(minimum(d)) # Test out of range inputs
@test quantile(d,-1.0f0) === minimum(d) # Test out of range inputs
@test quantile(d,[0.5f0,0.75f0]) isa Vector{Float32}
@test quantile(d,[0.5,0.75]) isa Vector{Float64}
@test cquantile(d,0.5) isa Float64
@test cquantile(d,0.5f0) isa Float32
@test rand(d) isa Float32
@test mean(d) isa Float32
@test median(d) isa Float32
end
@testset "Params" begin
d = BrokenPowerLaw([1.3,2.35],[0.08,1.0,100.0])
@test minimum(d) == 0.08
@test maximum(d) == 100.0
end
# test tuple constructor
d = BrokenPowerLaw((1.3,2.35),(0.08,1.0,100.0))
@test BrokenPowerLaw((1.3f0,2.35f0),(0.08,1.0,100.0)) isa BrokenPowerLaw{Float64}
@test BrokenPowerLaw((1.3f0,2.35f0),(0.08f0,1.0f0,100.0f0)) isa BrokenPowerLaw{Float32}
# test named types of BPLs
@testset "Chabrier2001BPL" begin
d = Chabrier2001BPL(0.08,100.0)
test_bpl(d)
end
@testset "Kroupa2001" begin
d = Kroupa2001(0.08,100.0)
test_bpl(d)
end
end
##############################################################################
@testset "LogNormalBPL" begin
@testset "Float64" begin
d = LogNormalBPL(-5.0,1.5,[2.35],[0.08,1.0,100.0])
@test d isa LogNormalBPL{Float64}
@test partype(d) == Float64
@test convert(LogNormalBPL{Float32},d) isa LogNormalBPL{Float32}
@test convert(LogNormalBPL{Float64},d) === d
@test pdf(d,1.0) isa Float64
@test pdf(d,1.0f0) isa Float64
@test logpdf(d,1.0) isa Float64
@test logpdf(d,1.0f0) isa Float64
@test cdf(d,1.0) isa Float64
@test cdf(d,1.0f0) isa Float64
@test ccdf(d,1.0) isa Float64
@test ccdf(d,1.0f0) isa Float64
@test quantile(d,0.5) isa Float64
@test quantile(d,0.5f0) isa Float64
@test cquantile(d,0.5) isa Float64
@test cquantile(d,0.5f0) isa Float64
@test quantile(d,[0.5f0,0.75f0]) isa Vector{Float64}
@test quantile(d,[0.5,0.75]) isa Vector{Float64}
@test rand(d) isa Float64
@test mean(d) isa Float64
end
@testset "Float32" begin
d = LogNormalBPL(-5.0f0,1.5f0,[2.35f0],[0.08f0,1.0f0,100.0f0])
@test d isa LogNormalBPL{Float32}
@test partype(d) == Float32
@test convert(LogNormalBPL{Float32},d) === d
@test convert(LogNormalBPL{Float64},d) isa LogNormalBPL{Float64}
@test pdf(d,1.0) isa Float64
@test pdf(d,1.0f0) isa Float32
@test logpdf(d,1.0) isa Float64
@test logpdf(d,1.0f0) isa Float32
@test cdf(d,1.0) isa Float64
@test cdf(d,1.0f0) isa Float32
@test ccdf(d,1.0) isa Float64
@test ccdf(d,1.0f0) isa Float32
@test quantile(d,0.5) isa Float64
@test quantile(d,0.5f0) isa Float32
@test quantile(d,[0.5f0,0.75f0]) isa Vector{Float32}
@test quantile(d,[0.5,0.75]) isa Vector{Float64}
@test rand(d) isa Float32
@test mean(d) isa Float32
end
@testset "Params" begin
d = LogNormalBPL(-5.0,1.5,[2.35],[0.08,1.0,100.0])
@test minimum(d) == 0.08
@test maximum(d) == 100.0
end
# test tuple constructor
d = LogNormalBPL(-5.0,1.5,(2.35,),(0.08,1.0,100.0))
@test LogNormalBPL(-5.0f0,1.5f0,(2.35f0,),(0.08,1.0,100.0)) isa LogNormalBPL{Float64}
@test LogNormalBPL(-5.0,1.5f0,(2.35f0,),(0.08f0,1.0f0,100.0f0)) isa LogNormalBPL{Float64}
# test named types of BPLs
@testset "Chabrier2003" begin
d = Chabrier2003(0.08,100.0)
test_lognormalbpl(d)
end
@testset "Chabrier2003System" begin
d = Chabrier2003System(0.08,100.0)
test_lognormalbpl(d)
end
end
| InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | docs | 3803 | InitialMassFunctions.jl
================
[](https://github.com/cgarling/InitialMassFunctions.jl/actions)
[](https://cgarling.github.io/InitialMassFunctions.jl/stable/)
[](https://cgarling.github.io/InitialMassFunctions.jl/dev/)
[](https://codecov.io/gh/cgarling/InitialMassFunctions.jl)
Stellar initial mass functions describe the distribution of initial masses that stars are born with. This package aims to implement and provide interfaces for working with initial mass functions, including but not limited to evaluating and sampling from published distributions. See the linked documentation above for more details.
Published IMFs we include are
```julia
Salpeter1955(mmin::Real=0.4, mmax::Real=Inf)
Chabrier2001BPL(mmin::Real=0.08, mmax::Real=Inf)
Chabrier2001LogNormal(mmin::Real=0.08, mmax::Real=Inf)
Chabrier2003(mmin::Real=0.08, mmax::Real=Inf)
Chabrier2003System(mmin::Real=0.08, mmax::Real=Inf)
Kroupa2001(mmin::Real=0.08, mmax::Real=Inf)
```
These all return subtypes of [`Distributions.ContinuousUnivariateDistribution`](https://juliastats.org/Distributions.jl/latest/univariate/#univariates) and have many of the typical methods from [`Distributions.jl`](https://github.com/JuliaStats/Distributions.jl) defined for them. These include
* pdf, logpdf
* cdf, ccdf
* `quantile(d, x::Real), quantile(d,x::AbstractArray), quantile!(y::AbstractArray, d, x::AbstractArray)`
* rand
* minimum, maximum, extrema
* mean, median, var, skewness, kurtosis (some of the higher moments don't work for `mmax=Inf` with instances of `BrokenPowerLaw`)
Note that `var`, `skewness`, and `kurtosis` are not currently defined for instances of `LogNormalBPL`, such as those returned by the `Chabrier2003` function.
Many other functions that work on [`Distributions.ContinuousUnivariateDistribution`](https://juliastats.org/Distributions.jl/latest/univariate/#univariates) will also work transparently on these `AbstractIMF` instances.
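For example, a minimal usage sketch (assuming the constructors and methods listed above; numeric output omitted):
```julia
using InitialMassFunctions, Distributions

imf = Kroupa2001(0.08, 100.0)  # broken power law between 0.08 and 100 solar masses

pdf(imf, 1.0)       # probability density at 1 solar mass
cdf(imf, 1.0)       # fraction of the distribution below 1 solar mass
quantile(imf, 0.5)  # median initial mass
rand(imf, 10)       # draw 10 random initial masses
mean(imf)           # mean initial mass
```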
Continuous broken-power-law distributions (such as those used in [Chabrier 2001](https://ui.adsabs.harvard.edu/abs/2001ApJ...554.1274C/abstract) and [Kroupa 2001](https://ui.adsabs.harvard.edu/abs/2001MNRAS.322..231K/abstract)) are provided through a new `BrokenPowerLaw` type. The lognormal distribution for masses `m<1.0` with a power law extension for higher masses as given in [Chabrier 2003](https://ui.adsabs.harvard.edu/abs/2003PASP..115..763C/abstract) is provided by a new `LogNormalBPL` type. Simpler models (e.g., `Salpeter1955` and `Chabrier2001LogNormal`) are implemented using built-in distributions from [`Distributions.jl`](https://github.com/JuliaStats/Distributions.jl) in concert with their [`truncated`](https://juliastats.org/Distributions.jl/stable/truncate/#Distributions.truncated) function.
Efficient samplers are implemented for the new types `BrokenPowerLaw` and `LogNormalBPL` such that batched calls (e.g., `rand(Chabrier2003(),1000)`) are more efficient than single calls (e.g., `d=Chabrier2003(); [rand(d) for i in 1:1000]`). These samplers can be created explicitly by calling `Distributions.sampler(d)`, with `d` being a `BrokenPowerLaw` or `LogNormalBPL` instance.
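As a sketch of the sampler workflow (the batched `rand` call and `Distributions.sampler` are described above; reusing the returned sampler with `rand(spl, N)` follows the generic `Distributions.jl` sampler interface, so treat that exact pattern as an assumption):
```julia
using InitialMassFunctions, Distributions

d = Chabrier2003()              # lognormal + power-law IMF
masses = rand(d, 1000)          # batched draws use the efficient sampler internally

spl = Distributions.sampler(d)  # or build the sampler once and reuse it
more_masses = rand(spl, 1000)
```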
## Versioning
`InitialMassFunctions.jl` follows Julia's [recommended versioning strategy](https://pkgdocs.julialang.org/v1/compatibility/#compat-pre-1.0), where breaking changes prior to version 1.0 will result in a bump of the minor version (e.g., 0.1.x |> 0.2.0) whereas feature additions and bug patches will increment the patch version (0.1.0 |> 0.1.1). | InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | docs | 3415 | # Convenience Constructors for Published IMFs
We provide convenience constructors for published IMFs that can be called without arguments, or with positional arguments to set different minimum and maximum stellar masses. The provided constructors are
```@docs
Salpeter1955
Chabrier2001BPL
Kroupa2001
Chabrier2001LogNormal
Chabrier2003
Chabrier2003System
```
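As a quick illustration (a minimal sketch based on the signatures above; outputs omitted), each constructor can be called with its defaults or with explicit mass limits:
```julia
using InitialMassFunctions

imf_default = Salpeter1955()           # uses the default mass limits
imf_bounded = Kroupa2001(0.08, 100.0)  # explicit minimum and maximum stellar masses
```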
```@setup pdfcompare
# @setup ensures input and output are hidden from final output
using Distributions, InitialMassFunctions
import GR
# Display figures as SVGs
GR.inline("svg")
# Write function to make comparison plot of provided literature IMF PDFs
function pdfcompare(Mmin, Mmax, npoints; kws...)
GR.figure(;
title="Literature IMF PDFs",
xlabel="M\$_\\odot\$",
ylabel="PDF (dN/dM)",
xlog=true,
ylog=true,
grid=false,
backgroundcolor=0, # white instead of transparent background for dark Documenter scheme
# font="Helvetica_Regular", # work around https://github.com/JuliaPlots/Plots.jl/issues/2596
linewidth=2.0, # thicker lines
size=(800,600),
xlim=(Mmin, 10.0),
ylim=(1e-4,20.0),
kws...)
imfs = [Salpeter1955(Mmin, Mmax),
Chabrier2001BPL(Mmin, Mmax),
Kroupa2001(Mmin, Mmax),
Chabrier2001LogNormal(Mmin, Mmax),
Chabrier2003(Mmin, Mmax),
Chabrier2003System(Mmin, Mmax)]
imf_labels = ["Salpeter1955",
"Chabrier2001BPL",
"Kroupa2001",
"Chabrier2001LogNormal",
"Chabrier2003",
"Chabrier2003System"]
masses = exp10.(range(log10(Mmin), log10(Mmax); length=npoints))
pdfs = [pdf.(imf, masses) for imf in imfs]
# # Normalization test
# using Test
# import QuadGK: quadgk
# let Mmin = 0.08, Mmax = 100.0,
# imfs = [Chabrier2001BPL(Mmin, Mmax),
# Kroupa2001(Mmin, Mmax),
# Chabrier2001LogNormal(Mmin, Mmax),
# Chabrier2003(Mmin, Mmax),
# Chabrier2003System(Mmin, Mmax)]
# @test all( isapprox.([quadgk(Base.Fix1(pdf, imf), Mmin, Mmax)[1] for imf in imfs],1) )
# end
# # Plot first IMF
# GR.plot(masses, Base.Fix1(pdf,Salpeter1955(Mmin, Mmax)))
# # Hold open
# GR.hold(true)
# for imf in imfs
# GR.plot(masses, Base.Fix1(pdf, imf))
# end
# GR.hold(false)
# # GR.legend(imf_labels...)
# Switch to plotting all lines at same time, easier for legend
# Call is GR.plot(x1, y1, x2, y2, ...and so on)
# Legend labels are then keyword `labels` and legend location is
# `location` with 1 = upper right, 2 = upper left,
# 3 = lower left, 4 = lower right, 11 = top right (outside of axes),
# 12 = half right (outside of axes), 13 = bottom right (outside of axes),
GR.plot( (collect((masses, i) for i in pdfs)...)...,
labels=imf_labels, location=3)
end
```
Below is a comparison plot of the probability density functions of the literature IMFs we provide, all normalized to integrate to 1 over the initial mass range (0.08, 100.0) solar masses.
```@example pdfcompare
pdfcompare(0.08, 100.0, 1000) # hide
```
\
We also provide a constructor for a single-power-law IMF,
```@docs
PowerLawIMF
```
which internally creates a truncated [Pareto](https://juliastats.org/Distributions.jl/stable/univariate/#Distributions.Pareto) distribution. Similarly, we provide a constructor for a single-component `LogNormal` IMF,
```@docs
LogNormalIMF
``` | InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | docs | 415 | # Defined Types
We provide two new types that are subtypes of [`AbstractIMF`](@ref InitialMassFunctions.AbstractIMF), which itself is a subtype of [`Distributions.ContinuousUnivariateDistribution`](https://juliastats.org/Distributions.jl/latest/univariate/#univariates) and generally follow the API provided by `Distributions.jl`. These are
```@docs
InitialMassFunctions.AbstractIMF
BrokenPowerLaw
LogNormalBPL
``` | InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
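For concreteness, here is a hedged sketch of constructing these types directly, with argument values taken from the package's test suite (the vector arguments appear to be the power-law exponents and the break masses, and the two leading scalars of `LogNormalBPL` parameterize its lognormal component; treat those readings as assumptions):
```julia
using InitialMassFunctions

bpl = BrokenPowerLaw([1.3, 2.35], [0.08, 1.0, 100.0])
lnbpl = LogNormalBPL(-5.0, 1.5, [2.35], [0.08, 1.0, 100.0])
```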
|
[
"MIT"
] | 0.1.4 | ea00ee2ef140aa82b63037bef01e1baf91cdaa4d | docs | 98 | # Utilities
```@docs
InitialMassFunctions.pl_integral
InitialMassFunctions.lognormal_integral
``` | InitialMassFunctions | https://github.com/cgarling/InitialMassFunctions.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | code | 591 | push!(LOAD_PATH, "../src/")
using Documenter, CEDICT
makedocs(
sitename="CEDICT.jl Documentation",
format=Documenter.HTML(
prettyurls=get(ENV, "CI", nothing) == "true"
),
modules=[CEDICT],
pages=[
"Home" => "index.md",
"API Reference" => [
"Loading Dictionaries" => "api_dictionaries.md",
"Searching in Dictionaries" => "api_searching.md",
"Convenience Functions" => "api_convenience.md"
]
]
)
deploydocs(
repo="github.com/JuliaCJK/CEDICT.jl.git",
devbranch="main",
devurl="latest"
)
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | code | 43695 | ### A Pluto.jl notebook ###
# v0.15.0
using Markdown
using InteractiveUtils
# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error).
macro bind(def, element)
quote
local el = $(esc(element))
global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : missing
el
end
end
# ╔═╡ ada79ea0-f796-11ea-1aa2-eddd747b2c83
begin
using PlutoUI
using CEDICT
using LightGraphs
using Plots, GraphPlot
end
# ╔═╡ fc1f2d1e-f799-11ea-2606-f5046723c160
using LongestPaths
# ╔═╡ 6269f9b0-f778-11ea-2004-394d3bf5ff7c
md"""
# Making 成語接龍
## Idiom Source
Here, I've just filtered the CEDICT for only entries that have "(idiom)" in one of the definitions.
"""
# ╔═╡ 89e768e0-f775-11ea-02b9-5b07cf88d0b4
dict = ChineseDictionary();
# ╔═╡ 04c116f0-f77c-11ea-305e-cf13b0d83a8f
trad_idioms = [entry.trad for entry in idioms(dict)]
# ╔═╡ afabceb0-f778-11ea-0b9a-bb3cbcdf2836
md"""
## Creating the Graph
Now, to make the computations easier, I'm going to represent all the 成語 that I have in a graph. We could make the graph have an edge between idiom A and idiom B if the last character in A is the first character in B. There can be different notions of being the "same": being the exact same character, having the exact same pronunciation, having the same primary pronunciation but a different tone, etc.
But this graph is awkward to create (to add an edge for an idiom, I would need to scan through all the other **terminal characters** (ones either at the beginning or the end of idioms) to figure out which ones to add as destinations), which would be a slow $O(n^2)$ algorithm. Instead, the vertices will represent terminal characters, and an idiom is represented as an edge from its starting character to its ending character.
Because LightGraphs is optimized for integer-like vertices, first we'll create some mappings between the traditional terminal characters and the first $n$ positive natural numbers (where $n$ is the number of these characters).
"""
# ╔═╡ d62e2d50-f77b-11ea-0f69-ab7371ff50d6
terminal_chars = union(
# initials
Set(first(idiom) for idiom in trad_idioms),
# finals
Set(last(idiom) for idiom in trad_idioms)
)
# ╔═╡ e94d86d0-f783-11ea-0e6c-5d774a41d58c
num2term = collect(terminal_chars)
# ╔═╡ ef8a82f0-f783-11ea-3e6c-d5591033c789
term2num = Dict(term_char => UInt16(index) for (index, term_char) in pairs(num2term))
# ╔═╡ 85e6c400-f77c-11ea-27f5-e7f9846cadcb
let num_idioms = length(trad_idioms), num_term_chars = length(terminal_chars)
md"""
*Brief aside*: It's a little interesting to me that even though there are $num_idioms idioms in the dictionary, there are only $num_term_chars total terminal characters. That means on average each terminal character is repeated $(round(2*num_idioms/num_term_chars; digits=2)) times.
"""
end
# ╔═╡ 021f2dce-f77a-11ea-3bc3-1770604ce10a
idiom_graph = SimpleDiGraph(UInt16(length(terminal_chars)));
# ╔═╡ 3417ba30-f786-11ea-2d30-a9df1f9ca6a0
edge_idioms = Dict{Tuple{Char, Char}, Vector{String}}();
# ╔═╡ c670e330-f776-11ea-22c6-598dabb22ca4
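# Each idiom contributes one edge from its first character to its last character;
# edge_idioms records which idiom(s) correspond to each (first, last) pair.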
for idiom in trad_idioms
edge = (first(idiom), last(idiom))
add_edge!(idiom_graph, term2num[first(edge)], term2num[last(edge)])
if haskey(edge_idioms, edge)
push!(edge_idioms[edge], idiom)
else
edge_idioms[edge] = [idiom]
end
end
# ╔═╡ dd133a30-f784-11ea-1d08-c9224043fcd8
md"""
Now to use the graph to access information about the connectedness of different idioms, we can do things like query the neighbors (converting between our graph vertices as needed).
"""
# ╔═╡ 9d70c340-f787-11ea-3b52-4302490de06a
initial = '叫';
# ╔═╡ e7634a5e-f780-11ea-1bab-8308a9188fd7
initial_outneighbors = num2term[outneighbors(idiom_graph, term2num[initial])]
# ╔═╡ 0a65dba2-f785-11ea-2112-ad09bee3f657
md"""
Here, this means that there are idioms that start with 叫 and end with either 天 or 迭. To see which ones, we'll have to query the `edge_idioms` dictionary.
"""
# ╔═╡ a1395970-f786-11ea-3273-c3cabe974073
[edge_idioms[(initial, final)] for final in initial_outneighbors]
# ╔═╡ 863059b0-f788-11ea-1466-173304238a3c
md"""
So there are two idioms for each final terminal character in this case.
We can also figure out what idioms end with that character.
"""
# ╔═╡ d8490b20-f788-11ea-0dc4-f31128f26958
initial_inneighbors = num2term[inneighbors(idiom_graph, term2num[initial])]
# ╔═╡ edda218e-f788-11ea-087e-63329996b830
[edge_idioms[(final, initial)] for final in initial_inneighbors]
# ╔═╡ 207b4070-f789-11ea-2f23-6945d91c5dc7
md"""
## Basic Chains
We can just iterate through the graph, choosing a random path to go down in order to create some Markov chain dragons.
First, let's figure out how to get to the next idiom.
"""
# ╔═╡ 59036480-f78f-11ea-1d45-9f33c884a7d1
function random_idiom(prev_idiom)
# get the last character of the previous idiom
first_char = last(prev_idiom)
# all possible end characters
end_chars = outneighbors(idiom_graph, term2num[first_char])
length(end_chars) == 0 && return nothing
# out of all possible end characters, choose a random one
last_char = num2term[rand(end_chars)]
# all possible idioms that start and end with these characters
possible_idioms = edge_idioms[(first_char, last_char)]
length(possible_idioms) == 0 && return nothing
# choose randomly from all possible idioms that start and end with these chars
rand(possible_idioms)
end;
# ╔═╡ d3786f40-f793-11ea-28f4-83043da67d5c
md"""
(A slight issue with this implementation is that all idioms starting with a certain character are not necessarily chosen with equal probability.)
Now, let's try it out on some idioms from our list.
"""
# ╔═╡ 6b7a0340-f789-11ea-2677-b967a8665cce
initial_idiom = rand(trad_idioms)
# ╔═╡ fbc7ad60-f790-11ea-2396-2d60b71ebb36
random_idiom(initial_idiom)
# ╔═╡ 73a9207e-f78b-11ea-3bd3-29cc530a2cdb
md"To make it repeat the process..."
# ╔═╡ 7adba030-f78b-11ea-2833-6d4cfe471978
function random_idiom_walk(idiom, len = 100)
idioms = Vector{String}()
i = 0
while idiom != nothing && i <= len
push!(idioms, idiom)
i += 1
idiom = random_idiom(idiom)
end
idioms
end;
# ╔═╡ a2c3faa0-f792-11ea-01c6-1fa82522f5ea
random_idiom_walk(initial_idiom)
# ╔═╡ 97e74cc0-f794-11ea-25f5-e73f194a7d6e
md"""
Wait a minute! These are all really short! It's really hard to actually connect them, even when repeating this experiment many times:
"""
# ╔═╡ 87d1cc50-f796-11ea-1777-1f8e16901fd9
maximum(length.(random_idiom_walk.(rand(trad_idioms, 100))))
# ╔═╡ a4cd7ee0-f795-11ea-0532-211a33e21562
md"""
Let's plot the distributions as histograms to see what's going on.
"""
# ╔═╡ c109a010-f796-11ea-170b-5de0f28d25b3
md"""
Number of Samples:
$(@bind samples Slider(10:2000, default=20, show_value=true))
"""
# ╔═╡ d1048c40-f797-11ea-3c3d-6383874745c1
md"""
Maximum Number of Iterations:
$(@bind len Slider(20:500, default=250, show_value=true))
"""
# ╔═╡ 56e84882-f796-11ea-0720-d1ee3698265e
histogram(length.(random_idiom_walk.(rand(trad_idioms, samples), len)), bins=15)
# ╔═╡ 13bf3850-f798-11ea-0591-57e00345f4c2
md"""
So it's very rare in general that we ever hit the limit on the number of iterations. When we do hit it, though, we *really* hit it, so maybe there's something else going on, like a cycle in the graph (which we currently don't detect).
"""
# ╔═╡ e96e21a0-f798-11ea-3f54-ff0b05d1eb35
md"""
## Longest Cycles & Paths
If we want to prove there is a cycle, the easiest way is to see if there's an idiom that starts and ends with the same character (these are actually already excluded, as LightGraphs graphs by default exclude self-loops).
"""
# ╔═╡ 42d03620-f799-11ea-3669-69a09bd1facf
[idiom for idiom in trad_idioms if first(idiom) == last(idiom)]
# ╔═╡ c4de4760-f799-11ea-21cf-d9d374c89e67
md"""
Voila! So there are already some self-loops (technically, we could have self-loops but no cycles of length 2; that's unlikely, though, so we'll just keep moving on).
What's more interesting is the longest cycle in this graph. However, this (and the similar longest path problem) is an NP-hard problem. We can still try the brute force method and hope our graph is sufficiently small and well-behaved.
"""
# ╔═╡ 1aa5b380-f79b-11ea-0a27-b5b7d7091b9f
find_longest_cycle(idiom_graph)
# ╔═╡ 33068bc0-f79b-11ea-1c88-e9d151c04ead
find_longest_path(idiom_graph)
# ╔═╡ cafc4c00-f7a8-11ea-205d-2f425d554032
gplot(idiom_graph)
# ╔═╡ 00000000-0000-0000-0000-000000000001
PLUTO_PROJECT_TOML_CONTENTS = """
[compat]
CEDICT = "~0.2.1"
GraphPlot = "~0.4.4"
LightGraphs = "~1.3.5"
LongestPaths = "~0.1.0"
Plots = "~1.16.6"
PlutoUI = "~0.7.9"
[deps]
CEDICT = "76a33514-0aea-4d2f-abaf-6a43b94fc20c"
GraphPlot = "a2cc645c-3eea-5389-862e-a155d0052231"
LightGraphs = "093fc24a-ae57-5d10-9952-331d41423f4d"
LongestPaths = "3a25c17e-307c-411a-a047-890a9a5fbb4d"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
PlutoUI = "7f904dfe-b85e-4ff6-b463-dae2292396a8"
"""
# ╔═╡ 00000000-0000-0000-0000-000000000002
PLUTO_MANIFEST_TOML_CONTENTS = """
# This file is machine-generated - editing it directly is not advised
[[Adapt]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "84918055d15b3114ede17ac6a7182f68870c16f7"
uuid = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
version = "3.3.1"
[[ArgTools]]
uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f"
[[ArnoldiMethod]]
deps = ["LinearAlgebra", "Random", "StaticArrays"]
git-tree-sha1 = "f87e559f87a45bece9c9ed97458d3afe98b1ebb9"
uuid = "ec485272-7323-5ecc-a04f-4719b315124d"
version = "0.1.0"
[[Artifacts]]
uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
[[Base64]]
uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
[[BenchmarkTools]]
deps = ["JSON", "Logging", "Printf", "Statistics", "UUIDs"]
git-tree-sha1 = "01ca3823217f474243cc2c8e6e1d1f45956fe872"
uuid = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
version = "1.0.0"
[[BinaryProvider]]
deps = ["Libdl", "Logging", "SHA"]
git-tree-sha1 = "ecdec412a9abc8db54c0efc5548c64dfce072058"
uuid = "b99e7846-7c00-51b0-8f62-c81ae34c0232"
version = "0.5.10"
[[Bzip2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c3598e525718abcc440f69cc6d5f60dda0a1b61e"
uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
version = "1.0.6+5"
[[CEDICT]]
deps = ["LazyArtifacts", "Pipe"]
git-tree-sha1 = "bc1353298886fea3126be2c440a0d6455d68c6b2"
uuid = "76a33514-0aea-4d2f-abaf-6a43b94fc20c"
version = "0.2.1"
[[Cairo_jll]]
deps = ["Artifacts", "Bzip2_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "JLLWrappers", "LZO_jll", "Libdl", "Pixman_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "e2f47f6d8337369411569fd45ae5753ca10394c6"
uuid = "83423d85-b0ee-5818-9007-b63ccbeb887a"
version = "1.16.0+6"
[[Cbc]]
deps = ["BinaryProvider", "Libdl", "MathOptInterface", "MathProgBase", "SparseArrays", "Test"]
git-tree-sha1 = "62d80f448b5d77b3f0a59cecf6197aad2a3aa280"
uuid = "9961bab8-2fa3-5c5a-9d89-47fab24efd76"
version = "0.6.7"
[[Clp]]
deps = ["BinaryProvider", "Clp_jll", "Libdl", "LinearAlgebra", "MathOptInterface", "MathProgBase", "SparseArrays"]
git-tree-sha1 = "08ca3c2fb7321ccab2f512f4fb77291e696a7d9b"
uuid = "e2554f3b-3117-50c0-817c-e040a3ddf72d"
version = "0.7.2"
[[Clp_jll]]
deps = ["Artifacts", "CoinUtils_jll", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS32_jll", "Osi_jll", "Pkg"]
git-tree-sha1 = "d9eca9fa2435959b5542b13409a8ec5f64c947c8"
uuid = "06985876-5285-5a41-9fcb-8948a742cc53"
version = "1.17.6+7"
[[CodecBzip2]]
deps = ["Bzip2_jll", "Libdl", "TranscodingStreams"]
git-tree-sha1 = "2e62a725210ce3c3c2e1a3080190e7ca491f18d7"
uuid = "523fee87-0ab8-5b00-afb7-3ecf72e48cfd"
version = "0.7.2"
[[CodecZlib]]
deps = ["TranscodingStreams", "Zlib_jll"]
git-tree-sha1 = "ded953804d019afa9a3f98981d99b33e3db7b6da"
uuid = "944b1d66-785c-5afd-91f1-9de20f533193"
version = "0.7.0"
[[CoinUtils_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS32_jll", "Pkg"]
git-tree-sha1 = "5186155a8609b71eae7e104fa2b8fbf6ecd5d9bb"
uuid = "be027038-0da8-5614-b30d-e42594cb92df"
version = "2.11.3+4"
[[ColorSchemes]]
deps = ["ColorTypes", "Colors", "FixedPointNumbers", "Random", "StaticArrays"]
git-tree-sha1 = "c8fd01e4b736013bc61b704871d20503b33ea402"
uuid = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
version = "3.12.1"
[[ColorTypes]]
deps = ["FixedPointNumbers", "Random"]
git-tree-sha1 = "32a2b8af383f11cbb65803883837a149d10dfe8a"
uuid = "3da002f7-5984-5a60-b8a6-cbb66c0b333f"
version = "0.10.12"
[[Colors]]
deps = ["ColorTypes", "FixedPointNumbers", "Reexport"]
git-tree-sha1 = "417b0ed7b8b838aa6ca0a87aadf1bb9eb111ce40"
uuid = "5ae59095-9a9b-59fe-a467-6f913c188581"
version = "0.12.8"
[[Compat]]
deps = ["Base64", "Dates", "DelimitedFiles", "Distributed", "InteractiveUtils", "LibGit2", "Libdl", "LinearAlgebra", "Markdown", "Mmap", "Pkg", "Printf", "REPL", "Random", "SHA", "Serialization", "SharedArrays", "Sockets", "SparseArrays", "Statistics", "Test", "UUIDs", "Unicode"]
git-tree-sha1 = "dc7dedc2c2aa9faf59a55c622760a25cbefbe941"
uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
version = "3.31.0"
[[CompilerSupportLibraries_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
[[Compose]]
deps = ["Base64", "Colors", "DataStructures", "Dates", "IterTools", "JSON", "LinearAlgebra", "Measures", "Printf", "Random", "Requires", "Statistics", "UUIDs"]
git-tree-sha1 = "c6461fc7c35a4bb8d00905df7adafcff1fe3a6bc"
uuid = "a81c6b42-2e10-5240-aca2-a61377ecd94b"
version = "0.9.2"
[[Contour]]
deps = ["StaticArrays"]
git-tree-sha1 = "9f02045d934dc030edad45944ea80dbd1f0ebea7"
uuid = "d38c429a-6771-53c6-b99e-75d170b6e991"
version = "0.5.7"
[[DataAPI]]
git-tree-sha1 = "ee400abb2298bd13bfc3df1c412ed228061a2385"
uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
version = "1.7.0"
[[DataStructures]]
deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
git-tree-sha1 = "4437b64df1e0adccc3e5d1adbc3ac741095e4677"
uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
version = "0.18.9"
[[DataValueInterfaces]]
git-tree-sha1 = "bfc1187b79289637fa0ef6d4436ebdfe6905cbd6"
uuid = "e2d170a0-9d28-54be-80f0-106bbe20a464"
version = "1.0.0"
[[Dates]]
deps = ["Printf"]
uuid = "ade2ca70-3891-5945-98fb-dc099432e06a"
[[DelimitedFiles]]
deps = ["Mmap"]
uuid = "8bb1440f-4735-579b-a4ab-409b98df4dab"
[[Distributed]]
deps = ["Random", "Serialization", "Sockets"]
uuid = "8ba89e20-285c-5b6f-9357-94700520ee1b"
[[Downloads]]
deps = ["ArgTools", "LibCURL", "NetworkOptions"]
uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
[[EarCut_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "92d8f9f208637e8d2d28c664051a00569c01493d"
uuid = "5ae413db-bbd1-5e63-b57d-d24a61df00f5"
version = "2.1.5+1"
[[Expat_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b3bfd02e98aedfa5cf885665493c5598c350cd2f"
uuid = "2e619515-83b5-522b-bb60-26c02a35a201"
version = "2.2.10+0"
[[FFMPEG]]
deps = ["FFMPEG_jll"]
git-tree-sha1 = "b57e3acbe22f8484b4b5ff66a7499717fe1a9cc8"
uuid = "c87230d0-a227-11e9-1b43-d7ebe4e7570a"
version = "0.4.1"
[[FFMPEG_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "JLLWrappers", "LAME_jll", "LibVPX_jll", "Libdl", "Ogg_jll", "OpenSSL_jll", "Opus_jll", "Pkg", "Zlib_jll", "libass_jll", "libfdk_aac_jll", "libvorbis_jll", "x264_jll", "x265_jll"]
git-tree-sha1 = "3cc57ad0a213808473eafef4845a74766242e05f"
uuid = "b22a6f82-2f65-5046-a5b2-351ab43fb4e5"
version = "4.3.1+4"
[[FixedPointNumbers]]
deps = ["Statistics"]
git-tree-sha1 = "335bfdceacc84c5cdf16aadc768aa5ddfc5383cc"
uuid = "53c48c17-4a7d-5ca2-90c5-79b7896eea93"
version = "0.8.4"
[[Fontconfig_jll]]
deps = ["Artifacts", "Bzip2_jll", "Expat_jll", "FreeType2_jll", "JLLWrappers", "Libdl", "Libuuid_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "35895cf184ceaab11fd778b4590144034a167a2f"
uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
version = "2.13.1+14"
[[Formatting]]
deps = ["Printf"]
git-tree-sha1 = "8339d61043228fdd3eb658d86c926cb282ae72a8"
uuid = "59287772-0a20-5a39-b81b-1366585eb4c0"
version = "0.4.2"
[[FreeType2_jll]]
deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "cbd58c9deb1d304f5a245a0b7eb841a2560cfec6"
uuid = "d7e528f0-a631-5988-bf34-fe36492bcfd7"
version = "2.10.1+5"
[[FriBidi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "aa31987c2ba8704e23c6c8ba8a4f769d5d7e4f91"
uuid = "559328eb-81f9-559d-9380-de523a88c83c"
version = "1.0.10+0"
[[GLFW_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libglvnd_jll", "Pkg", "Xorg_libXcursor_jll", "Xorg_libXi_jll", "Xorg_libXinerama_jll", "Xorg_libXrandr_jll"]
git-tree-sha1 = "dba1e8614e98949abfa60480b13653813d8f0157"
uuid = "0656b61e-2033-5cc2-a64a-77c0f6c09b89"
version = "3.3.5+0"
[[GR]]
deps = ["Base64", "DelimitedFiles", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Printf", "Random", "Serialization", "Sockets", "Test", "UUIDs"]
git-tree-sha1 = "b83e3125048a9c3158cbb7ca423790c7b1b57bea"
uuid = "28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71"
version = "0.57.5"
[[GR_jll]]
deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Pkg", "Qt5Base_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "e14907859a1d3aee73a019e7b3c98e9e7b8b5b3e"
uuid = "d2c73de3-f751-5644-a686-071e5b155ba9"
version = "0.57.3+0"
[[GeometryBasics]]
deps = ["EarCut_jll", "IterTools", "LinearAlgebra", "StaticArrays", "StructArrays", "Tables"]
git-tree-sha1 = "4136b8a5668341e58398bb472754bff4ba0456ff"
uuid = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
version = "0.3.12"
[[Gettext_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "9b02998aba7bf074d14de89f9d37ca24a1a0b046"
uuid = "78b55507-aeef-58d4-861c-77aaff3498b1"
version = "0.21.0+0"
[[Glib_jll]]
deps = ["Artifacts", "Gettext_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Libiconv_jll", "Libmount_jll", "PCRE_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "47ce50b742921377301e15005c96e979574e130b"
uuid = "7746bdde-850d-59dc-9ae8-88ece973131d"
version = "2.68.1+0"
[[GraphPlot]]
deps = ["ArnoldiMethod", "ColorTypes", "Colors", "Compose", "DelimitedFiles", "LightGraphs", "LinearAlgebra", "Random", "SparseArrays"]
git-tree-sha1 = "dd8f15128a91b0079dfe3f4a4a1e190e54ac7164"
uuid = "a2cc645c-3eea-5389-862e-a155d0052231"
version = "0.4.4"
[[Grisu]]
git-tree-sha1 = "53bb909d1151e57e2484c3d1b53e19552b887fb2"
uuid = "42e2da0e-8278-4e71-bc24-59509adca0fe"
version = "1.0.2"
[[HTTP]]
deps = ["Base64", "Dates", "IniFile", "MbedTLS", "NetworkOptions", "Sockets", "URIs"]
git-tree-sha1 = "86ed84701fbfd1142c9786f8e53c595ff5a4def9"
uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
version = "0.9.10"
[[Inflate]]
git-tree-sha1 = "f5fc07d4e706b84f72d54eedcc1c13d92fb0871c"
uuid = "d25df0c9-e2be-5dd7-82c8-3ad0b3e990b9"
version = "0.1.2"
[[IniFile]]
deps = ["Test"]
git-tree-sha1 = "098e4d2c533924c921f9f9847274f2ad89e018b8"
uuid = "83e8ac13-25f8-5344-8a64-a9f2b223428f"
version = "0.5.0"
[[InteractiveUtils]]
deps = ["Markdown"]
uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
[[IterTools]]
git-tree-sha1 = "05110a2ab1fc5f932622ffea2a003221f4782c18"
uuid = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
version = "1.3.0"
[[IteratorInterfaceExtensions]]
git-tree-sha1 = "a3f24677c21f5bbe9d2a714f95dcd58337fb2856"
uuid = "82899510-4779-5014-852e-03e436cf321d"
version = "1.0.0"
[[JLLWrappers]]
deps = ["Preferences"]
git-tree-sha1 = "642a199af8b68253517b80bd3bfd17eb4e84df6e"
uuid = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
version = "1.3.0"
[[JSON]]
deps = ["Dates", "Mmap", "Parsers", "Unicode"]
git-tree-sha1 = "81690084b6198a2e1da36fcfda16eeca9f9f24e4"
uuid = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
version = "0.21.1"
[[JSONSchema]]
deps = ["HTTP", "JSON", "ZipFile"]
git-tree-sha1 = "b84ab8139afde82c7c65ba2b792fe12e01dd7307"
uuid = "7d188eb4-7ad8-530c-ae41-71a32a6d4692"
version = "0.3.3"
[[JpegTurbo_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "d735490ac75c5cb9f1b00d8b5509c11984dc6943"
uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
version = "2.1.0+0"
[[LAME_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "f6250b16881adf048549549fba48b1161acdac8c"
uuid = "c1c5ebd0-6772-5130-a774-d5fcae4a789d"
version = "3.100.1+0"
[[LZO_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "e5b909bcf985c5e2605737d2ce278ed791b89be6"
uuid = "dd4b983a-f0e5-5f8d-a1b7-129d4a5fb1ac"
version = "2.10.1+0"
[[LaTeXStrings]]
git-tree-sha1 = "c7f1c695e06c01b95a67f0cd1d34994f3e7db104"
uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
version = "1.2.1"
[[Latexify]]
deps = ["Formatting", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "Printf", "Requires"]
git-tree-sha1 = "a4b12a1bd2ebade87891ab7e36fdbce582301a92"
uuid = "23fbe1c1-3f47-55db-b15f-69d7ec21a316"
version = "0.15.6"
[[LazyArtifacts]]
deps = ["Artifacts", "Pkg"]
uuid = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
[[LibCURL]]
deps = ["LibCURL_jll", "MozillaCACerts_jll"]
uuid = "b27032c2-a3e7-50c8-80cd-2d36dbcbfd21"
[[LibCURL_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll", "Zlib_jll", "nghttp2_jll"]
uuid = "deac9b47-8bc7-5906-a0fe-35ac56dc84c0"
[[LibGit2]]
deps = ["Base64", "NetworkOptions", "Printf", "SHA"]
uuid = "76f85450-5226-5b5a-8eaa-529ad045b433"
[[LibSSH2_jll]]
deps = ["Artifacts", "Libdl", "MbedTLS_jll"]
uuid = "29816b5a-b9ab-546f-933c-edad1886dfa8"
[[LibVPX_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "12ee7e23fa4d18361e7c2cde8f8337d4c3101bc7"
uuid = "dd192d2f-8180-539f-9fb4-cc70b1dcf69a"
version = "1.10.0+0"
[[Libdl]]
uuid = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
[[Libffi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "761a393aeccd6aa92ec3515e428c26bf99575b3b"
uuid = "e9f186c6-92d2-5b65-8a66-fee21dc1b490"
version = "3.2.2+0"
[[Libgcrypt_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgpg_error_jll", "Pkg"]
git-tree-sha1 = "64613c82a59c120435c067c2b809fc61cf5166ae"
uuid = "d4300ac3-e22c-5743-9152-c294e39db1e4"
version = "1.8.7+0"
[[Libglvnd_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll", "Xorg_libXext_jll"]
git-tree-sha1 = "7739f837d6447403596a75d19ed01fd08d6f56bf"
uuid = "7e76a0d4-f3c7-5321-8279-8d96eeed0f29"
version = "1.3.0+3"
[[Libgpg_error_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c333716e46366857753e273ce6a69ee0945a6db9"
uuid = "7add5ba3-2f88-524e-9cd5-f83b8a55f7b8"
version = "1.42.0+0"
[[Libiconv_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "42b62845d70a619f063a7da093d995ec8e15e778"
uuid = "94ce4f54-9a6c-5748-9c1c-f9c7231a4531"
version = "1.16.1+1"
[[Libmount_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "9c30530bf0effd46e15e0fdcf2b8636e78cbbd73"
uuid = "4b2f31a3-9ecc-558c-b454-b3730dcb73e9"
version = "2.35.0+0"
[[Libtiff_jll]]
deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Pkg", "Zlib_jll", "Zstd_jll"]
git-tree-sha1 = "340e257aada13f95f98ee352d316c3bed37c8ab9"
uuid = "89763e89-9b03-5906-acba-b20f662cd828"
version = "4.3.0+0"
[[Libuuid_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "7f3efec06033682db852f8b3bc3c1d2b0a0ab066"
uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
version = "2.36.0+0"
[[LightGraphs]]
deps = ["ArnoldiMethod", "DataStructures", "Distributed", "Inflate", "LinearAlgebra", "Random", "SharedArrays", "SimpleTraits", "SparseArrays", "Statistics"]
git-tree-sha1 = "432428df5f360964040ed60418dd5601ecd240b6"
uuid = "093fc24a-ae57-5d10-9952-331d41423f4d"
version = "1.3.5"
[[LinearAlgebra]]
deps = ["Libdl"]
uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
[[Logging]]
uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"
[[LongestPaths]]
deps = ["Cbc", "Clp", "LightGraphs", "MathProgBase", "Printf", "Random", "SparseArrays", "Test"]
git-tree-sha1 = "a1224131e9193721a56abbfbc44825c120a2bc8e"
uuid = "3a25c17e-307c-411a-a047-890a9a5fbb4d"
version = "0.1.0"
[[MacroTools]]
deps = ["Markdown", "Random"]
git-tree-sha1 = "6a8a2a625ab0dea913aba95c11370589e0239ff0"
uuid = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
version = "0.5.6"
[[Markdown]]
deps = ["Base64"]
uuid = "d6f4376e-aef5-505a-96c1-9c027394607a"
[[MathOptInterface]]
deps = ["BenchmarkTools", "CodecBzip2", "CodecZlib", "JSON", "JSONSchema", "LinearAlgebra", "MutableArithmetics", "OrderedCollections", "SparseArrays", "Test", "Unicode"]
git-tree-sha1 = "575644e3c05b258250bb599e57cf73bbf1062901"
uuid = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
version = "0.9.22"
[[MathProgBase]]
deps = ["LinearAlgebra", "SparseArrays"]
git-tree-sha1 = "9abbe463a1e9fc507f12a69e7f29346c2cdc472c"
uuid = "fdba3010-5040-5b88-9595-932c9decdf73"
version = "0.7.8"
[[MbedTLS]]
deps = ["Dates", "MbedTLS_jll", "Random", "Sockets"]
git-tree-sha1 = "1c38e51c3d08ef2278062ebceade0e46cefc96fe"
uuid = "739be429-bea8-5141-9913-cc70e7f3736d"
version = "1.0.3"
[[MbedTLS_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1"
[[Measures]]
git-tree-sha1 = "e498ddeee6f9fdb4551ce855a46f54dbd900245f"
uuid = "442fdcdd-2543-5da2-b0f3-8c86c306513e"
version = "0.3.1"
[[Missings]]
deps = ["DataAPI"]
git-tree-sha1 = "4ea90bd5d3985ae1f9a908bd4500ae88921c5ce7"
uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28"
version = "1.0.0"
[[Mmap]]
uuid = "a63ad114-7e13-5084-954f-fe012c677804"
[[MozillaCACerts_jll]]
uuid = "14a3606d-f60d-562e-9121-12d972cd8159"
[[MutableArithmetics]]
deps = ["LinearAlgebra", "SparseArrays", "Test"]
git-tree-sha1 = "3927848ccebcc165952dc0d9ac9aa274a87bfe01"
uuid = "d8a4904e-b15c-11e9-3269-09a3773c0cb0"
version = "0.2.20"
[[NaNMath]]
git-tree-sha1 = "bfe47e760d60b82b66b61d2d44128b62e3a369fb"
uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3"
version = "0.3.5"
[[NetworkOptions]]
uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908"
[[Ogg_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "7937eda4681660b4d6aeeecc2f7e1c81c8ee4e2f"
uuid = "e7412a2a-1a6e-54c0-be00-318e2571c051"
version = "1.3.5+0"
[[OpenBLAS32_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "ba4a8f683303c9082e84afba96f25af3c7fb2436"
uuid = "656ef2d0-ae68-5445-9ca0-591084a874a2"
version = "0.3.12+1"
[[OpenSSL_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "15003dcb7d8db3c6c857fda14891a539a8f2705a"
uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95"
version = "1.1.10+0"
[[Opus_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "51a08fb14ec28da2ec7a927c4337e4332c2a4720"
uuid = "91d4177d-7536-5919-b921-800302f37372"
version = "1.3.2+0"
[[OrderedCollections]]
git-tree-sha1 = "85f8e6578bf1f9ee0d11e7bb1b1456435479d47c"
uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
version = "1.4.1"
[[Osi_jll]]
deps = ["Artifacts", "CoinUtils_jll", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS32_jll", "Pkg"]
git-tree-sha1 = "ef540e28c9b82cb879e33c0885e1bbc9a1e6c571"
uuid = "7da25872-d9ce-5375-a4d3-7a845f58efdd"
version = "0.108.5+4"
[[PCRE_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b2a7af664e098055a7529ad1a900ded962bca488"
uuid = "2f80f16e-611a-54ab-bc61-aa92de5b98fc"
version = "8.44.0+0"
[[Parsers]]
deps = ["Dates"]
git-tree-sha1 = "c8abc88faa3f7a3950832ac5d6e690881590d6dc"
uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0"
version = "1.1.0"
[[Pipe]]
git-tree-sha1 = "6842804e7867b115ca9de748a0cf6b364523c16d"
uuid = "b98c9c47-44ae-5843-9183-064241ee97a0"
version = "1.3.0"
[[Pixman_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b4f5d02549a10e20780a24fce72bea96b6329e29"
uuid = "30392449-352a-5448-841d-b1acce4e97dc"
version = "0.40.1+0"
[[Pkg]]
deps = ["Artifacts", "Dates", "Downloads", "LibGit2", "Libdl", "Logging", "Markdown", "Printf", "REPL", "Random", "SHA", "Serialization", "TOML", "Tar", "UUIDs", "p7zip_jll"]
uuid = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
[[PlotThemes]]
deps = ["PlotUtils", "Requires", "Statistics"]
git-tree-sha1 = "a3a964ce9dc7898193536002a6dd892b1b5a6f1d"
uuid = "ccf2f8ad-2431-5c83-bf29-c5338b663b6a"
version = "2.0.1"
[[PlotUtils]]
deps = ["ColorSchemes", "Colors", "Dates", "Printf", "Random", "Reexport", "Statistics"]
git-tree-sha1 = "ae9a295ac761f64d8c2ec7f9f24d21eb4ffba34d"
uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
version = "1.0.10"
[[Plots]]
deps = ["Base64", "Contour", "Dates", "FFMPEG", "FixedPointNumbers", "GR", "GeometryBasics", "JSON", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "PlotThemes", "PlotUtils", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "Requires", "Scratch", "Showoff", "SparseArrays", "Statistics", "StatsBase", "UUIDs"]
git-tree-sha1 = "a680b659a1ba99d3663a40aa9acffd67768a410f"
uuid = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
version = "1.16.6"
[[PlutoUI]]
deps = ["Base64", "Dates", "InteractiveUtils", "JSON", "Logging", "Markdown", "Random", "Reexport", "Suppressor"]
git-tree-sha1 = "44e225d5837e2a2345e69a1d1e01ac2443ff9fcb"
uuid = "7f904dfe-b85e-4ff6-b463-dae2292396a8"
version = "0.7.9"
[[Preferences]]
deps = ["TOML"]
git-tree-sha1 = "00cfd92944ca9c760982747e9a1d0d5d86ab1e5a"
uuid = "21216c6a-2e73-6563-6e65-726566657250"
version = "1.2.2"
[[Printf]]
deps = ["Unicode"]
uuid = "de0858da-6303-5e67-8744-51eddeeeb8d7"
[[Qt5Base_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Fontconfig_jll", "Glib_jll", "JLLWrappers", "Libdl", "Libglvnd_jll", "OpenSSL_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libxcb_jll", "Xorg_xcb_util_image_jll", "Xorg_xcb_util_keysyms_jll", "Xorg_xcb_util_renderutil_jll", "Xorg_xcb_util_wm_jll", "Zlib_jll", "xkbcommon_jll"]
git-tree-sha1 = "ad368663a5e20dbb8d6dc2fddeefe4dae0781ae8"
uuid = "ea2cea3b-5b76-57ae-a6ef-0a8af62496e1"
version = "5.15.3+0"
[[REPL]]
deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"]
uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
[[Random]]
deps = ["Serialization"]
uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
[[RecipesBase]]
git-tree-sha1 = "b3fb709f3c97bfc6e948be68beeecb55a0b340ae"
uuid = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
version = "1.1.1"
[[RecipesPipeline]]
deps = ["Dates", "NaNMath", "PlotUtils", "RecipesBase"]
git-tree-sha1 = "9b8e57e3cca8828a1bc759840bfe48d64db9abfb"
uuid = "01d81517-befc-4cb6-b9ec-a95719d0359c"
version = "0.3.3"
[[Reexport]]
git-tree-sha1 = "5f6c21241f0f655da3952fd60aa18477cf96c220"
uuid = "189a3867-3050-52da-a836-e630ba90ab69"
version = "1.1.0"
[[Requires]]
deps = ["UUIDs"]
git-tree-sha1 = "4036a3bd08ac7e968e27c203d45f5fff15020621"
uuid = "ae029012-a4dd-5104-9daa-d747884805df"
version = "1.1.3"
[[SHA]]
uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce"
[[Scratch]]
deps = ["Dates"]
git-tree-sha1 = "0b4b7f1393cff97c33891da2a0bf69c6ed241fda"
uuid = "6c6a2e73-6563-6170-7368-637461726353"
version = "1.1.0"
[[Serialization]]
uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
[[SharedArrays]]
deps = ["Distributed", "Mmap", "Random", "Serialization"]
uuid = "1a1011a3-84de-559e-8e89-a11a2f7dc383"
[[Showoff]]
deps = ["Dates", "Grisu"]
git-tree-sha1 = "91eddf657aca81df9ae6ceb20b959ae5653ad1de"
uuid = "992d4aef-0814-514b-bc4d-f2e9a6c4116f"
version = "1.0.3"
[[SimpleTraits]]
deps = ["InteractiveUtils", "MacroTools"]
git-tree-sha1 = "daf7aec3fe3acb2131388f93a4c409b8c7f62226"
uuid = "699a6c99-e7fa-54fc-8d76-47d257e15c1d"
version = "0.9.3"
[[Sockets]]
uuid = "6462fe0b-24de-5631-8697-dd941f90decc"
[[SortingAlgorithms]]
deps = ["DataStructures"]
git-tree-sha1 = "2ec1962eba973f383239da22e75218565c390a96"
uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c"
version = "1.0.0"
[[SparseArrays]]
deps = ["LinearAlgebra", "Random"]
uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
[[StaticArrays]]
deps = ["LinearAlgebra", "Random", "Statistics"]
git-tree-sha1 = "745914ebcd610da69f3cb6bf76cb7bb83dcb8c9a"
uuid = "90137ffa-7385-5640-81b9-e52037218182"
version = "1.2.4"
[[Statistics]]
deps = ["LinearAlgebra", "SparseArrays"]
uuid = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
[[StatsAPI]]
git-tree-sha1 = "1958272568dc176a1d881acb797beb909c785510"
uuid = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
version = "1.0.0"
[[StatsBase]]
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
git-tree-sha1 = "2f6792d523d7448bbe2fec99eca9218f06cc746d"
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
version = "0.33.8"
[[StructArrays]]
deps = ["Adapt", "DataAPI", "Tables"]
git-tree-sha1 = "44b3afd37b17422a62aea25f04c1f7e09ce6b07f"
uuid = "09ab397b-f2b6-538f-b94a-2f83cf4a842a"
version = "0.5.1"
[[Suppressor]]
git-tree-sha1 = "a819d77f31f83e5792a76081eee1ea6342ab8787"
uuid = "fd094767-a336-5f1f-9728-57cf17d0bbfb"
version = "0.2.0"
[[TOML]]
deps = ["Dates"]
uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
[[TableTraits]]
deps = ["IteratorInterfaceExtensions"]
git-tree-sha1 = "c06b2f539df1c6efa794486abfb6ed2022561a39"
uuid = "3783bdb8-4a98-5b6b-af9a-565f29a5fe9c"
version = "1.0.1"
[[Tables]]
deps = ["DataAPI", "DataValueInterfaces", "IteratorInterfaceExtensions", "LinearAlgebra", "TableTraits", "Test"]
git-tree-sha1 = "8ed4a3ea724dac32670b062be3ef1c1de6773ae8"
uuid = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
version = "1.4.4"
[[Tar]]
deps = ["ArgTools", "SHA"]
uuid = "a4e569a6-e804-4fa4-b0f3-eef7a1d5b13e"
[[Test]]
deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[TranscodingStreams]]
deps = ["Random", "Test"]
git-tree-sha1 = "7c53c35547de1c5b9d46a4797cf6d8253807108c"
uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
version = "0.9.5"
[[URIs]]
git-tree-sha1 = "97bbe755a53fe859669cd907f2d96aee8d2c1355"
uuid = "5c2747f8-b7ea-4ff2-ba2e-563bfd36b1d4"
version = "1.3.0"
[[UUIDs]]
deps = ["Random", "SHA"]
uuid = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
[[Unicode]]
uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5"
[[Wayland_jll]]
deps = ["Artifacts", "Expat_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "3e61f0b86f90dacb0bc0e73a0c5a83f6a8636e23"
uuid = "a2964d1f-97da-50d4-b82a-358c7fce9d89"
version = "1.19.0+0"
[[Wayland_protocols_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Wayland_jll"]
git-tree-sha1 = "2839f1c1296940218e35df0bbb220f2a79686670"
uuid = "2381bf8a-dfd0-557d-9999-79630e7b1b91"
version = "1.18.0+4"
[[XML2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "1acf5bdf07aa0907e0a37d3718bb88d4b687b74a"
uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
version = "2.9.12+0"
[[XSLT_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
git-tree-sha1 = "91844873c4085240b95e795f692c4cec4d805f8a"
uuid = "aed1982a-8fda-507f-9586-7b0439959a61"
version = "1.1.34+0"
[[Xorg_libX11_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxcb_jll", "Xorg_xtrans_jll"]
git-tree-sha1 = "5be649d550f3f4b95308bf0183b82e2582876527"
uuid = "4f6342f7-b3d2-589e-9d20-edeb45f2b2bc"
version = "1.6.9+4"
[[Xorg_libXau_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4e490d5c960c314f33885790ed410ff3a94ce67e"
uuid = "0c0b7dd1-d40b-584c-a123-a41640f87eec"
version = "1.0.9+4"
[[Xorg_libXcursor_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXfixes_jll", "Xorg_libXrender_jll"]
git-tree-sha1 = "12e0eb3bc634fa2080c1c37fccf56f7c22989afd"
uuid = "935fb764-8cf2-53bf-bb30-45bb1f8bf724"
version = "1.2.0+4"
[[Xorg_libXdmcp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4fe47bd2247248125c428978740e18a681372dd4"
uuid = "a3789734-cfe1-5b06-b2d0-1dd0d9d62d05"
version = "1.1.3+4"
[[Xorg_libXext_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "b7c0aa8c376b31e4852b360222848637f481f8c3"
uuid = "1082639a-0dae-5f34-9b06-72781eeb8cb3"
version = "1.3.4+4"
[[Xorg_libXfixes_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "0e0dc7431e7a0587559f9294aeec269471c991a4"
uuid = "d091e8ba-531a-589c-9de9-94069b037ed8"
version = "5.0.3+4"
[[Xorg_libXi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll", "Xorg_libXfixes_jll"]
git-tree-sha1 = "89b52bc2160aadc84d707093930ef0bffa641246"
uuid = "a51aa0fd-4e3c-5386-b890-e753decda492"
version = "1.7.10+4"
[[Xorg_libXinerama_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll"]
git-tree-sha1 = "26be8b1c342929259317d8b9f7b53bf2bb73b123"
uuid = "d1454406-59df-5ea1-beac-c340f2130bc3"
version = "1.1.4+4"
[[Xorg_libXrandr_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll"]
git-tree-sha1 = "34cea83cb726fb58f325887bf0612c6b3fb17631"
uuid = "ec84b674-ba8e-5d96-8ba1-2a689ba10484"
version = "1.5.2+4"
[[Xorg_libXrender_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "19560f30fd49f4d4efbe7002a1037f8c43d43b96"
uuid = "ea2f1a96-1ddc-540d-b46f-429655e07cfa"
version = "0.9.10+4"
[[Xorg_libpthread_stubs_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "6783737e45d3c59a4a4c4091f5f88cdcf0908cbb"
uuid = "14d82f49-176c-5ed1-bb49-ad3f5cbd8c74"
version = "0.1.0+3"
[[Xorg_libxcb_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "XSLT_jll", "Xorg_libXau_jll", "Xorg_libXdmcp_jll", "Xorg_libpthread_stubs_jll"]
git-tree-sha1 = "daf17f441228e7a3833846cd048892861cff16d6"
uuid = "c7cfdc94-dc32-55de-ac96-5a1b8d977c5b"
version = "1.13.0+3"
[[Xorg_libxkbfile_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "926af861744212db0eb001d9e40b5d16292080b2"
uuid = "cc61e674-0454-545c-8b26-ed2c68acab7a"
version = "1.1.0+4"
[[Xorg_xcb_util_image_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "0fab0a40349ba1cba2c1da699243396ff8e94b97"
uuid = "12413925-8142-5f55-bb0e-6d7ca50bb09b"
version = "0.4.0+1"
[[Xorg_xcb_util_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxcb_jll"]
git-tree-sha1 = "e7fd7b2881fa2eaa72717420894d3938177862d1"
uuid = "2def613f-5ad1-5310-b15b-b15d46f528f5"
version = "0.4.0+1"
[[Xorg_xcb_util_keysyms_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "d1151e2c45a544f32441a567d1690e701ec89b00"
uuid = "975044d2-76e6-5fbe-bf08-97ce7c6574c7"
version = "0.4.0+1"
[[Xorg_xcb_util_renderutil_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "dfd7a8f38d4613b6a575253b3174dd991ca6183e"
uuid = "0d47668e-0667-5a69-a72c-f761630bfb7e"
version = "0.3.9+1"
[[Xorg_xcb_util_wm_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "e78d10aab01a4a154142c5006ed44fd9e8e31b67"
uuid = "c22f9ab0-d5fe-5066-847c-f4bb1cd4e361"
version = "0.4.1+1"
[[Xorg_xkbcomp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxkbfile_jll"]
git-tree-sha1 = "4bcbf660f6c2e714f87e960a171b119d06ee163b"
uuid = "35661453-b289-5fab-8a00-3d9160c6a3a4"
version = "1.4.2+4"
[[Xorg_xkeyboard_config_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xkbcomp_jll"]
git-tree-sha1 = "5c8424f8a67c3f2209646d4425f3d415fee5931d"
uuid = "33bec58e-1273-512f-9401-5d533626f822"
version = "2.27.0+4"
[[Xorg_xtrans_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "79c31e7844f6ecf779705fbc12146eb190b7d845"
uuid = "c5fb5394-a638-5e4d-96e5-b29de1b5cf10"
version = "1.4.0+3"
[[ZipFile]]
deps = ["Libdl", "Printf", "Zlib_jll"]
git-tree-sha1 = "c3a5637e27e914a7a445b8d0ad063d701931e9f7"
uuid = "a5390f91-8eb1-5f08-bee0-b1d1ffed6cea"
version = "0.9.3"
[[Zlib_jll]]
deps = ["Libdl"]
uuid = "83775a58-1f1d-513f-b197-d71354ab007a"
[[Zstd_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "cc4bf3fdde8b7e3e9fa0351bdeedba1cf3b7f6e6"
uuid = "3161d3a3-bdf6-5164-811a-617609db77b4"
version = "1.5.0+0"
[[libass_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "acc685bcf777b2202a904cdcb49ad34c2fa1880c"
uuid = "0ac62f75-1d6f-5e53-bd7c-93b484bb37c0"
version = "0.14.0+4"
[[libfdk_aac_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "7a5780a0d9c6864184b3a2eeeb833a0c871f00ab"
uuid = "f638f0a6-7fb0-5443-88ba-1cc74229b280"
version = "0.1.6+4"
[[libpng_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "94d180a6d2b5e55e447e2d27a29ed04fe79eb30c"
uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
version = "1.6.38+0"
[[libvorbis_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Ogg_jll", "Pkg"]
git-tree-sha1 = "c45f4e40e7aafe9d086379e5578947ec8b95a8fb"
uuid = "f27f6e37-5d2b-51aa-960f-b287f2bc3b7a"
version = "1.3.7+0"
[[nghttp2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850ede-7688-5339-a07c-302acd2aaf8d"
[[p7zip_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
[[x264_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "d713c1ce4deac133e3334ee12f4adff07f81778f"
uuid = "1270edf5-f2f9-52d2-97e9-ab00b5d0237a"
version = "2020.7.14+2"
[[x265_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "487da2f8f2f0c8ee0e83f39d13037d6bbf0a45ab"
uuid = "dfaa095f-4041-5dcd-9319-2fabd8486b76"
version = "3.0.0+3"
[[xkbcommon_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Wayland_jll", "Wayland_protocols_jll", "Xorg_libxcb_jll", "Xorg_xkeyboard_config_jll"]
git-tree-sha1 = "ece2350174195bb31de1a63bea3a41ae1aa593b6"
uuid = "d8fb68d0-12a3-5cfd-a85a-d49703b185fd"
version = "0.9.1+5"
"""
# ╔═╡ Cell order:
# ╠═ada79ea0-f796-11ea-1aa2-eddd747b2c83
# ╟─6269f9b0-f778-11ea-2004-394d3bf5ff7c
# ╠═89e768e0-f775-11ea-02b9-5b07cf88d0b4
# ╠═04c116f0-f77c-11ea-305e-cf13b0d83a8f
# ╟─afabceb0-f778-11ea-0b9a-bb3cbcdf2836
# ╠═d62e2d50-f77b-11ea-0f69-ab7371ff50d6
# ╠═e94d86d0-f783-11ea-0e6c-5d774a41d58c
# ╠═ef8a82f0-f783-11ea-3e6c-d5591033c789
# ╟─85e6c400-f77c-11ea-27f5-e7f9846cadcb
# ╠═021f2dce-f77a-11ea-3bc3-1770604ce10a
# ╠═3417ba30-f786-11ea-2d30-a9df1f9ca6a0
# ╠═c670e330-f776-11ea-22c6-598dabb22ca4
# ╟─dd133a30-f784-11ea-1d08-c9224043fcd8
# ╠═9d70c340-f787-11ea-3b52-4302490de06a
# ╠═e7634a5e-f780-11ea-1bab-8308a9188fd7
# ╟─0a65dba2-f785-11ea-2112-ad09bee3f657
# ╠═a1395970-f786-11ea-3273-c3cabe974073
# ╟─863059b0-f788-11ea-1466-173304238a3c
# ╠═d8490b20-f788-11ea-0dc4-f31128f26958
# ╠═edda218e-f788-11ea-087e-63329996b830
# ╟─207b4070-f789-11ea-2f23-6945d91c5dc7
# ╠═59036480-f78f-11ea-1d45-9f33c884a7d1
# ╟─d3786f40-f793-11ea-28f4-83043da67d5c
# ╠═6b7a0340-f789-11ea-2677-b967a8665cce
# ╠═fbc7ad60-f790-11ea-2396-2d60b71ebb36
# ╟─73a9207e-f78b-11ea-3bd3-29cc530a2cdb
# ╠═7adba030-f78b-11ea-2833-6d4cfe471978
# ╠═a2c3faa0-f792-11ea-01c6-1fa82522f5ea
# ╟─97e74cc0-f794-11ea-25f5-e73f194a7d6e
# ╠═87d1cc50-f796-11ea-1777-1f8e16901fd9
# ╟─a4cd7ee0-f795-11ea-0532-211a33e21562
# ╟─c109a010-f796-11ea-170b-5de0f28d25b3
# ╟─d1048c40-f797-11ea-3c3d-6383874745c1
# ╠═56e84882-f796-11ea-0720-d1ee3698265e
# ╟─13bf3850-f798-11ea-0591-57e00345f4c2
# ╟─e96e21a0-f798-11ea-3f54-ff0b05d1eb35
# ╟─42d03620-f799-11ea-3669-69a09bd1facf
# ╟─c4de4760-f799-11ea-21cf-d9d374c89e67
# ╠═fc1f2d1e-f799-11ea-2606-f5046723c160
# ╠═1aa5b380-f79b-11ea-0a27-b5b7d7091b9f
# ╠═33068bc0-f79b-11ea-1c88-e9d151c04ead
# ╠═cafc4c00-f7a8-11ea-205d-2f425d554032
# ╟─00000000-0000-0000-0000-000000000001
# ╟─00000000-0000-0000-0000-000000000002
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | code | 507 | module CEDICT
export
DictionaryEntry,
traditional_headword, simplified_headword, pinyin_pronunciation, word_senses,
ChineseDictionary,
search_headwords, search_senses, search_pinyin,
idioms
include("dictionary.jl")
include("searching.jl")
"""
idioms([dict])
Retrieves the set of idioms in the provided dictionary (by looking for a label of "(idiom)" in any
of the senses) or in the default dictionary if none provided.
"""
idioms(dict=ChineseDictionary()) = search_senses(dict, "(idiom)")
end
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | code | 3307 | using LazyArtifacts
#===============================================================================
# Dictionary Entries
===============================================================================#
struct DictionaryEntry
trad::String
simp::String
pinyin::String
senses::Vector{String}
end
traditional_headword(entry::DictionaryEntry) = entry.trad
simplified_headword(entry::DictionaryEntry) = entry.simp
pinyin_pronunciation(entry::DictionaryEntry) = entry.pinyin
word_senses(entry::DictionaryEntry) = entry.senses
function Base.print(io::IO, entry::DictionaryEntry)
char_str = entry.trad == entry.simp ? entry.trad : "$(entry.trad) ($(entry.simp))"
print(io, "$char_str: [$(entry.pinyin)]\n")
print(io, join(map(w -> "\t" * w, entry.senses), "\n"))
return nothing
end
#===============================================================================
# Dictionary
===============================================================================#
"""
ChineseDictionary([filename])
Load a text-based dictionary either from the default dictionary file or from the provided
filename. The format of the text file must be the same as
[that used by the CC-CEDICT project](https://cc-cedict.org/wiki/format:syntax) for
compatibility reasons.
For general use, it's easiest to just use the default dictionary (from the CC-CEDICT project).
This is loaded if you don't specify a filename. This dictionary is updated from the official
project page every so often.
"""
struct ChineseDictionary
entries::Dict{String, Vector{DictionaryEntry}}
metadata::Dict{String, String}
function ChineseDictionary(filename=joinpath(artifact"cedict", "cedict_ts.u8"))
dict = Dict{String, Vector{DictionaryEntry}}()
metadata_dict = Dict{String, String}()
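# Entry lines follow the CC-CEDICT format "trad simp [pinyin] /sense 1/sense 2/.../",
# e.g. (illustrative sample line, not taken from the file): 中國 中国 [Zhong1 guo2] /China/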
pattern = r"^([^#\s]+) ([^\s]+) \[(.*)\] /(.+)/$"
for line in eachline(filename)
# process lines containing metadata
if startswith(line, "#!") && count(==('='), line) == 1
key, val = split(strip(line[3:end]), "=")
metadata_dict[key] = val
# process lines actually containing dictionary entries
elseif (m = match(pattern, line)) !== nothing
trad, simp, pinyin, defns = String.(m.captures)
entry = DictionaryEntry(trad, simp, pinyin, split(defns, "/"))
dict[trad] = push!(get(dict, trad, []), entry)
simp != trad && (dict[simp] = push!(get(dict, simp, []), entry))
end
end
return new(dict, metadata_dict)
end
end
# iteration
Base.iterate(dict::ChineseDictionary) = iterate(dict.entries)
Base.iterate(dict::ChineseDictionary, state) = iterate(dict.entries, state)
Base.IteratorSize(::Type{ChineseDictionary}) = Base.HasLength()
Base.IteratorEltype(::Type{ChineseDictionary}) = Base.HasEltype()
Base.length(dict::ChineseDictionary) = length(dict.entries)
Base.eltype(::Type{ChineseDictionary}) = Pair{String, Vector{DictionaryEntry}}
# indexing
Base.getindex(dict::ChineseDictionary, i) = getindex(dict.entries, i)
Base.setindex!(dict::ChineseDictionary, v, i) = setindex!(dict.entries, v, i)
# dictionaries
Base.keys(dict::ChineseDictionary) = keys(dict.entries)
Base.values(dict::ChineseDictionary) = values(dict.entries)
Base.haskey(dict::ChineseDictionary, key) = haskey(dict.entries, key)
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | code | 2953 | using Pipe
"""
search_filtered(func, dict)
Produce a set of entries for which `func(entry)` returns `true`. This is considered
an internal function and not part of the public API for this package; use at your own risk!
"""
function search_filtered(filter_func, dict::ChineseDictionary)
word_entries = Set{DictionaryEntry}()
for entry_list in values(dict)
for entry in entry_list
filter_func(entry) && push!(word_entries, entry)
end
end
word_entries
end
"""
search_headwords(dict, keyword)
Search for the given `keyword` in the dictionary as a headword, in either traditional or
simplified characters (returns results whose headword contains `keyword` as a
substring; this behavior may change in future releases).
## Examples
```julia-repl
julia> search_headwords(dict, "2019冠狀病毒病") .|> println;
2019冠狀病毒病 (2019冠状病毒病): [er4 ling2 yi1 jiu3 guan1 zhuang4 bing4 du2 bing4]
COVID-19, the coronavirus disease identified in 2019
```
"""
search_headwords(dict::ChineseDictionary, keyword) =
search_filtered(dict) do entry
occursin(keyword, entry.trad) || occursin(keyword, entry.simp)
end
"""
search_senses(dict, keyword)
Search for the given `keyword` in the dictionary among the meanings/senses (the `keyword` must
appear exactly in one or more of the definition senses; this behavior may change in future
releases).
## Examples
```julia-repl
julia> search_senses(dict, "fishnet") .|> println;
漁網 (渔网): [yu2 wang3]
fishing net
fishnet
扳罾: [ban1 zeng1]
to lift the fishnet
網襪 (网袜): [wang3 wa4]
fishnet stockings
```
"""
search_senses(dict::ChineseDictionary, keyword) =
search_filtered(dict) do entry
any(occursin.(keyword, entry.senses))
end
"""
search_pinyin(dict, keyword)
Search the dictionary for terms that fuzzy match the pinyin search key provided.
The language that is understood for the search key is described below.
# Examples
```julia-repl
julia> search_pinyin(dict, "yi2 han4") .|> println;
遺憾 (遗憾): [yi2 han4]
regret
to regret
to be sorry that
julia> search_pinyin(dict, "bang shou") .|> println;
榜首: [bang3 shou3]
top of the list
幫手 (帮手): [bang1 shou3]
helper
assistant
```
"""
function search_pinyin(dict::ChineseDictionary, pinyin_searchkey)
search_regex = _prepare_pinyin_regex(pinyin_searchkey)
search_filtered(dict) do entry
match(search_regex, entry.pinyin) != nothing
end
end
function _prepare_pinyin_regex(searchkey)
re = @pipe split(searchkey, " ") |>
map(w -> (w == "*" ? raw"(\w+\d\s*)*" : w), _) |> # TODO: doesn't handle spaces correctly'
map(w -> (w == "+" ? raw"\w+\d(\s+\w+\d)*" : w), _) |>
map(w -> (w == "?" ? raw"(\w+\d)?" : w), _) |>
map(w -> (endswith(w, r"\d") ? w : w * raw"\d?"), _) |>
join(_, raw"\s+")
Regex("^$(re)\$")
end
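# Illustration: _prepare_pinyin_regex("yi2 han4") yields r"^yi2\s+han4$", while
# _prepare_pinyin_regex("bang shou") yields r"^bang\d?\s+shou\d?$", i.e. tone
# numbers in the search key are optional.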
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | code | 1206 |
@testset "dictionary loading: tiny" begin
dict = ChineseDictionary("res/tiny_dict.txt")
@test length(dict) == 39
@test all(haskey.(Ref(dict), ["展销会", "反目成仇", "村民", "歐巴桑", "可恃", "戄"]))
end
@testset "dictionary loading: mini" begin
dict = ChineseDictionary("res/mini_dict.txt")
@test length(dict) == 115
@test all(haskey.(Ref(dict), ["仁术", "周遊世界", "和睦", "代數拓撲", "未冠", "棒冰"]))
end
@testset "dictionary loading: small" begin
dict = ChineseDictionary("res/small_dict.txt")
@test length(dict) == 744
@test all(haskey.(Ref(dict), ["代數拓撲", "做事", "優惠券", "优惠券"]))
end
@testset "dictionary headword search" begin
end
@testset "dictionary sense/meaning search" begin
dict = ChineseDictionary("res/tiny_dict.txt")
ids = idioms(dict)
@test length(ids) == 2
villager_defn = first(search_senses(dict, "villager"))
@test traditional_headword(villager_defn) == simplified_headword(villager_defn) == "村民"
@test pinyin_pronunciation(villager_defn) == "cun1 min2"
@test length(word_senses(villager_defn)) == 1
@test first(word_senses(villager_defn)) == "villager"
with_terms = search_senses(dict, "with")
@test length(with_terms) == 2
end
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | code | 531 | using CEDICT
using Test
@testset "all tests" begin
include("dict_dict_tests.jl")
@testset "pinyin fuzzy matching" begin
re = CEDICT._prepare_pinyin_regex("jue2 dai4 shuang1 jiao1")
@test match(re, "jue2 dai4 shuang1 jiao1") !== nothing
@test match(re, "jue2 dai4 shuang1 jiao2") === nothing
@test match(re, "wu2 jue2 dai4 shuang1 jiao1") === nothing
@test match(re, "jue2 shuang1 jiao1") === nothing
@test match(re, "jue2 dai4 shuang1 jiao1 ji4") === nothing
end
end
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | docs | 574 | # NEWS.md - Changes since v0.2.2
## Public API
Four new exported functions for the DictionaryEntry struct:
* traditional_headword
* simplified_headword
* pinyin_pronunciation
* word_senses
These are preferred over directly accessing the fields of the struct, as the fields may change.
* The `search_pinyin` function now supports more capable wildcard matching
## Other Changes (not necessarily public)
* metadata from the dictionary file is also saved (not currently used for anything)
## Behind the Scenes
* more testing of dictionary loading and basic dictionary functionality
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | docs | 993 | # CEDICT.jl
[](https://github.com/JuliaCJK/CEDICT.jl/actions/workflows/tests.yml)
[](https://JuliaCJK.github.io/CEDICT.jl/latest/)
[](https://github.com/JuliaCJK/CEDICT.jl/actions/workflows/nightly.yaml)
A Julia package for programmatically using the CC-CEDICT Chinese-English dictionary. See the [documentation](https://JuliaCJK.github.io/CEDICT.jl/latest/) for details.
## Licensing
This package is provided under the MIT License; however, the required data file from the CC-CEDICT project (supplied as a Pkg artifact) is redistributed under its CC BY-SA 3.0 license. This package provides functions that can modify/build on the original data, so be aware of this especially if used in a commercial setting.
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | docs | 136 | # Convenience Functions
There are some functions (really light wrappers) for certain common functionality.
```@docs
idioms
```
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | docs | 728 | # Creating & Loading Dictionaries
Dictionaries can be loaded using the `ChineseDictionary` constructor. Currently, dictionaries can only be loaded from text files, but there may be support for other formats in the future.
```@docs
ChineseDictionary
```
## File Format for a Text-Based Dictionary
See the [formatting guide for the CC-CEDICT project](https://cc-cedict.org/wiki/format:syntax) for how each line should be formatted (just consider the formatting elements and not necessarily the other notes on translation/dictionary entry creation). Each line of the file should be a single entry; for examples, see the small test dictionaries in the repository. Lines starting with a "#" are treated as comments and ignored.
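As a minimal sketch (the file name `my_dict.txt` is hypothetical), a custom dictionary is loaded and used like any other dictionary:
```julia
using CEDICT
dict = ChineseDictionary("my_dict.txt")   # any file in the CC-CEDICT format
haskey(dict, "村民") && println(dict["村民"])
# Iterating a dictionary yields headword => entries pairs
for (headword, entries) in dict
    # ...
end
```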
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | docs | 2432 | # Searching within a Dictionary
There are several ways to search in a dictionary, depending on what part of the dictionary entries are used and what the user is searching for.
(All the examples are using the default dictionary.)
```@docs
search_headwords
search_senses
search_pinyin
```
## Advanced Pinyin Searching
The `search_pinyin` function also supports a certain flavor of fuzzy matching and searches with missing information. For example, tone numbers are not required. In addition,
- "*" will match any additional characters,
- "?" will match up to one additional character, and
- "+" will match one or more additional characters.
If these characters are separated by spaces (not attached to any other word character), "character" means a Chinese character; if these characters are attached to other word characters, "character" means a pinyin character.
For example, using these metacharacters on their own (separated by spaces), we can search even when we do not know all the characters in the phrase.
```julia-repl
julia> search_pinyin(dict, "si ma dang huo ma yi") .|> println;
死馬當活馬醫 (死马当活马医): [si3 ma3 dang4 huo2 ma3 yi1]
lit. to give medicine to a dead horse (idiom)
fig. to keep trying everything in a desperate situation
julia> search_pinyin(dict, "si ma dang ? ma yi") .|> println;
死馬當活馬醫 (死马当活马医): [si3 ma3 dang4 huo2 ma3 yi1]
lit. to give medicine to a dead horse (idiom)
fig. to keep trying everything in a desperate situation
julia> search_pinyin(dict, "si + ma yi") .|> println;
死馬當活馬醫 (死马当活马医): [si3 ma3 dang4 huo2 ma3 yi1]
lit. to give medicine to a dead horse (idiom)
fig. to keep trying everything in a desperate situation
julia> search_pinyin(dict, "si ma dang * huo ma yi") .|> println;
死馬當活馬醫 (死马当活马医): [si3 ma3 dang4 huo2 ma3 yi1]
lit. to give medicine to a dead horse (idiom)
fig. to keep trying everything in a desperate situation
```
The above examples all return the same result.
We could also instead use the metacharacters attached to pinyin letters if we don't know the full sound of a word.
## More Advanced Searching
The un-exported method `search_filtered` can be used if none of the above options are powerful/flexible enough. However, this requires working with the raw `DictionaryEntry` struct and is subject to breakage in future releases (not a part of the public API).
```@docs
CEDICT.search_filtered
```
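For example, a minimal sketch (using only the exported accessors) that collects every entry with more than five senses:
```julia
long_entries = CEDICT.search_filtered(dict) do entry
    length(word_senses(entry)) > 5
end
```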
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | docs | 366 | # CEDICT.jl Documentation
Based on the CC-CEDICT project, this package provides convenient ways to programmatically access the dictionary and perform operations such as searching.
Still in early development!
```@contents
```
## Installation
This package can be installed in the usual way via Pkg from the General Registry:
```julia-repl
julia> ] add CEDICT
```
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.3.0 | 724d117af1440b595a40daaff70dcf98b4a06b51 | docs | 205 | # Examples using the CEDICT.jl package
This directory contains example use cases of this package for learning or analysis.
These examples are all [Pluto.jl](https://github.com/fonsp/Pluto.jl) notebooks.
| CEDICT | https://github.com/JuliaCJK/CEDICT.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 857 | using Documenter, MatrixPolynomials, LinearAlgebra, Statistics
isdefined(Main, :NOPLOTS) && NOPLOTS || include("plots.jl")
makedocs(;
modules=[MatrixPolynomials],
format = Documenter.HTML(assets = ["assets/latex.js"],
mathengine = Documenter.MathJax()),
pages=[
"Home" => "index.md",
"Functions of matrices" => "funcv.md",
"Leja points" => "leja.md",
"Divided differences" => "divided_differences.md",
"Newton polynomials" => "newton_polynomials.md",
"φₖ functions" => "phi_functions.md",
],
repo=Remotes.GitHub("jagot", "MatrixPolynomials.jl"),
sitename="MatrixPolynomials.jl",
authors="Stefanos Carlström <[email protected]>",
doctest=false,
checkdocs=:exports
)
deploydocs(;
repo="github.com/jagot/MatrixPolynomials.jl",
)
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 3769 | using PythonPlot
using Jagot.plotting
plot_style("ggplot")
import MatrixPolynomials: φ₁, φ, std_div_diff, ⏃,
Leja, FastLeja, points, NewtonPolynomial
using SpecialFunctions
function leja()
m = 10
a,b = -2,2
l = Leja(range(a, stop=b, length=1000), m)
fl = FastLeja(a, b, m)
ζ = points(l)
fζ = points(fl)
cfigure("leja") do
csubplot(211,nox=true) do
plot(1:m, ζ, ".-", label="Leja points")
plot(1:m, fζ, ".--", label="Fast Leja points")
ylabel(L"\zeta_m")
legend()
end
csubplot(212) do
plot(1:m, abs.(l.∏ζ).^(1 ./ (1:m)), ".-", label="Leja points")
plot(1:m, abs.(fl.∏ζ).^(1 ./ (1:m)), ".--", label="Fast Leja points")
xlabel(L"m")
ylabel(L"C(\{\zeta_{1:m}\})")
end
end
savefig("docs/src/figures/leja_points.svg")
end
function φ₁_accuracy()
φnaïve(x) = (exp(x) - 1)/x
x = 10 .^ range(-18, stop=0, length=1000)
cfigure("φ₁") do
csubplot(211,nox=true) do
semilogx(x, φnaïve.(x))
semilogx(x, φ₁.(x), "--")
end
csubplot(212) do
loglog(x, abs.(φnaïve.(x) - φ₁.(x))./abs.(φ₁.(x)))
xlabel(L"x")
ylabel("Relative error")
end
end
savefig("docs/src/figures/phi_1_accuracy.svg")
end
function φₖ_accuracy()
x = vcat(0,10 .^ range(-18,stop=2.5,length=1000))
cfigure("φ") do
for k = 100:-1:0
loglog(x, φ.(k,x))
end
xlabel(L"x")
end
savefig("docs/src/figures/phi_k_accuracy.svg")
function φnaïve(k,z)
if k == 0
exp(z)
elseif k == 1
(exp(z)-1)/z
else
(φnaïve(k-1,z) - 1/gamma(k))/z
end
end
x = vcat(0,10 .^ range(-18,stop=0,length=1000))
cfigure("φ naïve") do
for k = 4:-1:0
loglog(x, φnaïve.(k,x))
end
ylim(1e-3,10)
xlabel(L"x")
end
savefig("docs/src/figures/phi_k_naive_accuracy.svg")
end
function div_differences_cancellation()
x = range(-2, stop=2, length=100)
ξ = collect(x)
f = exp
d_std = @time std_div_diff(f, ξ, 1, 0, 1)
d_std_big = @time std_div_diff(f, big.(ξ), 1, 0, 1)
d_auto = @time ⏃(f, ξ, 1, 0, 1)
cfigure("div differences cancellation") do
loglog(d_std, label="Recursive")
loglog(Float64.(d_std_big), label="Recursive, BigFloat")
loglog(d_auto, "--", color="black", label="Taylor series")
xlabel(L"j")
ylabel(L"\Delta\!\!\!|\,(\zeta_{1:j})\exp")
end
legend(framealpha=0.75)
savefig("docs/src/figures/div_differences_cancellation.svg")
end
function div_differences_sine()
μ = 10.0 # Extent of interval
m = 40 # Number of Leja points
ζ = points(Leja(μ*range(-1,stop=1,length=1000),m))
d = ⏃(sin, ζ, 1, 0, 1)
np = NewtonPolynomial(sin, ζ)
x = range(-μ, stop=μ, length=1000)
f_np = np.(x)
f_exact = sin.(x)
cfigure("div differences sine") do
csubplot(211, nox=true) do
plot(x, f_np, "-", label=L"$\sin(x)$ approximation")
plot(x, f_exact, "--", label=L"\sin(x)")
legend()
end
csubplot(212) do
semilogy(x, abs.(f_np - f_exact), label="Absolute error")
semilogy(x, abs.(f_np - f_exact)./abs.(f_exact), label="Relative error")
xlabel(L"x")
legend()
end
end
savefig("docs/src/figures/div_differences_sine.svg")
end
macro echo(expr)
println(expr)
:(@time $expr)
end
@info "Documentation plots"
mkpath("docs/src/figures")
@echo leja()
@echo φ₁_accuracy()
@echo φₖ_accuracy()
@echo div_differences_cancellation()
@echo div_differences_sine()
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 570 | module MatrixPolynomials
using Parameters
using LinearAlgebra
using ArnoldiMethod
using ArnoldiMethod: SR, SI, LR, LI
using SpecialFunctions
const Γ = gamma
const lnΓ = loggamma
using Statistics
using UnicodeFun
using Formatting
using Compat
include("spectral_shapes.jl")
include("spectral_ranges.jl")
include("leja.jl")
include("fast_leja.jl")
include("phi_functions.jl")
include("matrix_closures.jl")
include("taylor_series.jl")
include("propagate_divided_differences.jl")
include("divided_differences.jl")
include("newton.jl")
include("funcv.jl")
end # module
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 4531 | """
std_div_diff(f, ζ, h, c, γ)
Compute the divided differences of `f` at `h*(c .+ γ*ζ)`, where `ζ` is
a vector of (possibly complex) interpolation points, using the
standard recursion formula.
"""
function std_div_diff(f, ζ::AbstractVector{T}, h, c, γ) where T
m = length(ζ)
d = Vector{T}(undef, m)
for i = 1:m
d[i] = f(h*(c + γ*ζ[i]))
for j = 2:i
d[i] = (d[i]-d[j-1])/(ζ[i]-ζ[j-1])
end
end
d
end
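# Illustration: std_div_diff(exp, [0.0, 1.0], 1, 0, 1) returns
# [exp(0), (exp(1) - exp(0))/(1 - 0)] ≈ [1.0, 1.71828…], i.e. the zeroth- and
# first-order divided differences of exp at the points 0 and 1.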
"""
ts_div_diff_table(f, ζ, h, c, γ; kwargs...)
Compute the divided differences of `f` at `h*(c .+ γ*ζ)`, where `ζ` is
a vector of (possibly complex) interpolation points, by forming the
full divided differences table using the Taylor series of `f(H)`
(computed using [`taylor_series`](@ref)). If there is a scaling
relationship available for `f`, the Taylor series of `f(τ*H)` is
computed instead, and the full solution is recovered using
[`propagate_div_diff`](@ref).
"""
function ts_div_diff_table(f, ζ::AbstractVector{T}, h, c, γ; kwargs...) where T
ts = taylor_series(f)
m = length(ζ)
Ζ = Bidiagonal(ζ, ones(T, m-1), :L)
H = h*(c*I + γ*Ζ)
# Scale the step taken, if there is a functional relationship for
# f which permits this.
xmax = maximum(ζᵢ -> abs(h*(c + γ*ζᵢ)), ζ)
J = num_steps(f, xmax)
τ = one(T)/J
fH = ts(τ*H; kwargs...)
if J > 1
propagate_div_diff(f, fH, J, H, τ)
else
fH[:,1]
end
end
"""
⏃(f, ζ, args...)
Compute the divided differences of `f` at `ζ`, using a method that is
optimized for the function `f`, if one is available, otherwise
fallback to [`MatrixPolynomials.ts_div_diff_table`](@ref).
"""
⏃(args...) = ts_div_diff_table(args...)
# * Special cases for φₖ(z)
# These are linear fits that are always above the values of Table 3.1 of
#
# - Al-Mohy, A. H., & Higham, N. J. (2011). Computing the action of the
# matrix exponential, with an application to exponential
# integrators. SIAM Journal on Scientific Computing, 33(2),
# 488–511. http://dx.doi.org/10.1137/100788860
"""
min_degree(::typeof(exp), θ)
Minimum degree of Taylor polynomial to represent `exp` to machine
precision, within a circle of radius `θ`.
"""
min_degree(::typeof(exp), θ::Float64) =
ceil(Int, 4.1666θ + 15.0)
min_degree(::typeof(exp), θ::Float32) =
ceil(Int, 3.7037θ + 6.8519)
"""
taylor_series(::Type{T}, ::typeof(exp), n; s=1, θ=3.5) where T
Compute the Taylor series of `exp(z/s)`, with `n` terms, or as many
terms as required to achieve convergence within a circle of radius
`θ`, whichever is largest.
"""
function taylor_series(::Type{T}, ::typeof(exp), n; s=1, θ=3.5) where T
N = max(n, min_degree(exp, θ))
    vcat(one(T), one(T) ./ [Γ(k+1)*float(s)^k for k = 1:N])
end
"""
div_diff_table_basis_change(f, ζ[; kwargs...])
Construct the table of divided differences of `f` at the interpolation
points `ζ`, based on the algorithm on page 26 of
- Zivcovich, F. (2019). Fast and accurate computation of divided
differences for analytic functions, with an application to the
exponential function. Dolomites Research Notes on Approximation,
12(1), 28–42.
"""
function div_diff_table_basis_change(f, ζ::AbstractVector{T}; s=1, kwargs...) where T
n = length(ζ)-1
ts = taylor_series(T, f, n+1; s=s, kwargs...)
N = length(ts)-1
F = zeros(T, n+1, n+1)
for i = 1:n
F[i+1:n+1,i] .= ζ[i] .- ζ[i+1:n+1]
end
for j = n:-1:0
for k = N:-1:(n-j+1)
ts[k] += ζ[j+1]*ts[k+1]
end
for k = (n-j):-1:1
ts[k] += F[k+j+1,j+1]*ts[k+1]
end
F[j+1,j+1:n+1] .= ts[1:n-j+1]
end
F[1:n+2:(n+1)^2] .= f.(ζ/s)
UpperTriangular(F)
end
"""
φₖ_div_diff_basis_change(k, ζ[; θ=3.5, s=1])
Specialized interface to [`div_diff_table_basis_change`](@ref) for the `φₖ`
functions. `θ` is the desired radius of convergence of the Taylor
series of `φₖ`, and `s` is the scaling-and-squaring parameter, which
if set to zero, will be calculated to fulfill `θ`.
"""
function φₖ_div_diff_basis_change(k, ζ::AbstractVector{T}; θ=real(T(3.5)), s=1) where T
μ = mean(ζ)
z = vcat(zeros(k), ζ) .- μ
n = length(z) - 1
# Scaling
if s == 0
Δz = maximum(a -> maximum(b -> abs(a-b), z), z)
s = max(1, ceil(Int, Δz/θ))
end
# The Taylor series of φₖ is just a shifted version of exp.
F = div_diff_table_basis_change(exp, z; s=s, θ=θ)
dd = F[1,:]
# Squaring
for j = 1:s-1
lmul!(F', dd)
end
exp(μ)*dd[k+1:end]
end
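# Illustration: for interpolation points clustered around zero, the divided
# differences of φₖ returned above approach the Taylor coefficients of φₖ,
# i.e. 1/(k+j)! for j = 0, 1, … (cf. the package tests).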
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 2251 | """
Leja(ζ, ∏ζ, ζs, ia, ib)
Generate the approximate Leja points `ζ` along a line; `∏ζ[i]` is the
product of the distances of `ζ[i]`, and `ζs` are candidate points.
The quality of the fast Leja points for large amounts is not dependent
on a preexisting discretization of a set, as is the case for
[`Leja`](@ref), however fast Leja points are restricted to lying on a
line in the complex plane instead.
This is a Julia port of the Matlab algorithm published in
- Baglama, J., Calvetti, D., & Reichel, L. (1998). Fast Leja
points. Electron. Trans. Numer. Anal, 7(124-140), 119–120.
"""
struct FastLeja{T}
ζ::Vector{T}
∏ζ::Vector{T}
ζs::Vector{T}
ia::Vector{Int}
ib::Vector{Int}
end
meanζ(ζ) = (i,j) -> (ζ[i]+ζ[j])/2
"""
fast_leja!(fl::FastLeja, n)
Generate `n` fast Leja points; this can be used to add more fast Leja points
to an already formed sequence.
"""
function fast_leja!(fl::FastLeja, n)
@unpack ζ, ∏ζ, ζs, ia, ib = fl
mζ = meanζ(ζ)
curn = length(ζ)
if curn < n
resize!(ζ, n)
resize!(∏ζ, n)
resize!(ζs, n)
resize!(ia, n)
resize!(ib, n)
end
for i = curn+1:n
maxi = argmax(abs.(view(∏ζ, 1:i-2)))
ζ[i] = ζs[maxi]
ia[i-1] = i
ib[i-1] = ib[maxi]
ib[maxi] = i
ζs[maxi] = mζ(ia[maxi], ib[maxi])
ζs[i-1] = mζ(ia[i-1], ib[i-1])
sel = 1:i-1
∏ζ[maxi] = prod(ζs[maxi] .- ζ[sel])
∏ζ[i-1] = prod(ζs[i-1] .- ζ[sel])
∏ζ[sel] .*= ζs[sel] .- ζ[i]
end
fl
end
"""
FastLeja(a, b, n)
Generate the first `n` approximate Leja points along the line `a–b`.
"""
function FastLeja(a, b, n)
T = float(promote_type(typeof(a),typeof(b)))
ζ = zeros(T, 3)
∏ζ = zeros(T, 3)
ζs = zeros(T, 3)
ia = zeros(Int, 3)
ib = zeros(Int, 3)
ζ[1:2] = abs(a) > abs(b) ? [a,b] : [b,a]
ζ[3] = (a+b)/2
mζ = meanζ(ζ)
ζs[1] = mζ(2,3)
ζs[2] = mζ(3,1)
∏ζ[1] = prod(ζs[1] .- ζ[1:3])
∏ζ[2] = prod(ζs[2] .- ζ[1:3])
ia[1] = 2
ib[1] = 3
ia[2] = 3
ib[2] = 1
fl = FastLeja(ζ, ∏ζ, ζs, ia, ib)
fast_leja!(fl, n)
end
"""
points(fl::FastLeja)
Return the fast Leja points generated so far.
"""
points(fl::FastLeja) = fl.ζ
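# Illustration: points(FastLeja(-2.0, 2.0, 10)) starts with 2.0, -2.0, 0.0, …;
# an endpoint of largest modulus, the other endpoint, the midpoint, and then
# candidate midpoints maximizing the distance product to the previous points.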
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 3407 | """
FuncV{f,T}(s⁻¹Amc, c, s)
Structure for applying the action of a polynomial approximation of the
function `f` with a matrix argument acting on a vector, i.e. `w ←
p(A)*v` where `p(z) ≈ f(z)`. Various properties of `f` may be used,
such as shifting and scaling of the argument, to improve convergence
and/or accuracy. `s⁻¹Amc` is the shifted linear operator ``s⁻¹(A-c)``, `c` and `s`
are the shift and scaling, respectively.
"""
struct FuncV{f,Op,P,C,S}
"Shifted and scaled linear operator that is iterated"
s⁻¹Amc::Op
"Polynomial approximation of `f`"
p::P
"Numeric shift employed in iterations"
c::C
"Scaling employed in iterations"
s::S
FuncV(f::Function, s⁻¹Amc::Op, p::P, c::C, s::S) where {Op,P,C,S} =
new{f,Op,P,C,S}(s⁻¹Amc, p, c, s)
end
Base.size(f::FuncV, args...) = size(f.s⁻¹Amc, args...)
# For arbitrary functions, we do not scale or shift, since there are
# no universal scaling and/or shifting laws.
scaling(::Function, λ) = 1
shift(::Function, λ) = 0
scale(A, h) = isone(h) ? A : h*A
shift(A, c) = iszero(c) ? A : A - I*c
"""
FuncV(f, A, m[, t=1; distribution=:leja, kwargs...])
Create a [`FuncV`](@ref) that is used to compute `f(t*A)*v` using a
polynomial approximation of `f` formed using `m` interpolation points.
`kwargs...` are passed on to [`spectral_range`](@ref) which estimates
the range over which `f` has to be interpolated.
"""
function FuncV(f::Function, A, m::Integer, t=one(eltype(A));
distribution=:leja, leja_multiplier=100,
λ=nothing, scale_and_shift=true,
tol=1e-15, spectral_fun=identity, kwargs...)
if isnothing(λ)
λ = spectral_range(t, spectral_fun(A); kwargs...)
end
ζ = if distribution == :leja
points(Leja(range(λ, m*leja_multiplier), m))
elseif distribution == :fast_leja
points(FastLeja(λ.a, λ.b, m))
else
throw(ArgumentError("Invalid distribution of interpolation points $(distribution); valid choices are :leja and :fast_leja"))
end
At = scale(A, t)
s,c,s⁻¹Amc = if scale_and_shift
s = scaling(f, λ)
c = shift(f, λ)
s⁻¹Amc = scale(shift(At, c), 1/s)
s,c,s⁻¹Amc
else
1, 0, At
end
n = size(A,1)
p = if n > 1
d = ⏃(f, ζ, 1, 0, 1)
np = NewtonPolynomial(ζ, d)
NewtonMatrixPolynomial(np, n, error_estimator(f, np, n, tol))
else
@warn "Scalar case, no interpolation polynomial necessary"
nothing
end
FuncV(f, s⁻¹Amc, p, c, s)
end
function Base.show(io::IO, funcv::FuncV{f}) where f
write(io, "$(funcv.p) of $f")
end
# This does not yet consider substepping
matvecs(f::FuncV) = matvecs(f.p)
unshift!(w, funcv::FuncV) = @assert iszero(funcv.c)
single_step!(w, funcv::FuncV, v, α::Number=true) =
mul!(w, funcv.p, funcv.s⁻¹Amc, v, α)
"""
mul!(w, funcv::FuncV, v)
Evaluate the action of the matrix polynomial `funcv` on `v` and store
the result in `w`.
"""
function LinearAlgebra.mul!(w, funcv::FuncV, v, α::Number=true, β::Number=false)
@assert iszero(β)
single_step!(w, funcv, v, α)
unshift!(w, funcv)
end
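# Illustration (sketch; the names below are placeholders): approximate
# w ← exp(-im*δt*H)*v for a Hamiltonian H using 40 interpolation points:
#     f̂ = FuncV(exp, H, 40, -im*δt; tol=1e-12)
#     mul!(w, f̂, v)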
scalar(A::AbstractMatrix) = A[1]
function scalar(A)
v = ones(eltype(A),1)
w = similar(v)
mul!(w, A, v)[1]
end
function single_step!(w, funcv::FuncV{f,<:Any,Nothing}, v, α::Number=true) where f
    copyto!(w, v)
    lmul!(α*f(scalar(funcv.s⁻¹Amc)), w)
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 1872 | """
Leja(S, ζ, ∏ζ)
Generate the Leja points `ζ` from the discretized set `S`; `∏ζ[i]` is
the product of the distances of `ζ[i]` to all preceding points, which can
be used to estimate the capacity of the set `S`.
This is an implementation of the algorithm described in
- Reichel, L. (1990). Newton Interpolation At Leja Points. BIT, 30(2),
332–346. [DOI: 10.1007/bf02017352](http://dx.doi.org/10.1007/bf02017352)
"""
struct Leja{T}
S::Vector{T}
ζ::Vector{T}
∏ζ::Vector{T}
end
"""
leja!(l::Leja, n)
Generate `n` Leja points in the [`Leja`](@ref) sequence `l`; this can
be used to add more Leja points to an already formed sequence. Cannot
generate more Leja points than the underlying discretization `l.S`
contains; furthermore, the quality of the Leja points may deteriorate
when `n` approaches `length(l.S)`.
"""
function leja!(l::Leja, n)
@unpack S,ζ,∏ζ = l
curn = length(ζ)
n-curn > length(S) &&
throw(DimensionMismatch("Cannot generate more Leja points than underlying discretization contains"))
if curn < n
resize!(ζ, n)
resize!(∏ζ, n)
end
if curn == 0 && n > 0
        maxi = argmax(i -> abs(S[i]), eachindex(S))
ζ[1] = S[maxi]
deleteat!(S, maxi)
∏ζ[1] = 0
curn += 1
end
for i = curn+1:n
maxi = argmax(eachindex(S)) do j
ζs = S[j]
prod(ζₖ -> abs(ζs-ζₖ), view(ζ, 1:i-1))
end
ζ[i] = S[maxi]
deleteat!(S, maxi)
∏ζ[i] = prod(j -> abs(ζ[j]-ζ[i]), 1:i-1)
end
l
end
"""
Leja(S, n)
Construct a Leja sequence generator from the discretized set `S` and
generate `n` Leja points.
"""
function Leja(S::AbstractVector{T}, n::Integer) where T
l = Leja(collect(S), Vector{T}(), Vector{T}())
leja!(l, n)
end
"""
points(l::Leja)
Return the Leja points generated so far.
"""
points(l::Leja) = l.ζ
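# Illustration: points(Leja(range(-2, stop=2, length=1001), 3)) selects the
# endpoints ±2 first (largest modulus), then 0, which maximizes the product of
# distances to the already chosen points on this grid.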
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 1375 | """
closure(x::Number)
Generates the closure type of `xⁿ` as `n → ∞`, i.e. a scalar.
"""
closure(::T) where {T<:Number} = T
for Mat = [:Matrix, :Diagonal, :LowerTriangular, :UpperTriangular]
docstring = """
closure(x::$Mat)
Generates the closure type of `xⁿ` as `n → ∞`, i.e. a `$Mat`.
"""
@eval begin
@doc $docstring
closure(::M) where {M<:$Mat} = M
end
end
for Mat = [:Tridiagonal, :SymTridiagonal]
docstring = """
closure(x::$Mat)
Generates the closure type of `xⁿ` as `n → ∞`, i.e. a `Matrix`.
"""
    @eval begin
        @doc $docstring
        closure(::$Mat{T}) where T = Matrix{T}
    end
end
"""
closure(x::Bidiagonal)
Generates the closure type of `xⁿ` as `n → ∞`, i.e. a
`UpperTriangular` or `LowerTriangular`, depending on `x.uplo`.
"""
function closure(B::Bidiagonal{T}) where T
if B.uplo == 'L'
LowerTriangular{T,Matrix{T}}
else
UpperTriangular{T,Matrix{T}}
end
end
function Base.zero(::Type{Mat}, m, n) where {T,Mat<:Diagonal{T}}
@assert m == n
    Mat(zeros(T, m))
end
function Base.zero(::Type{Mat}, m, n) where {T,Mat<:Tridiagonal{T}}
@assert m == n
Mat(zeros(T,m-1),zeros(T,m),zeros(T,m-1))
end
function Base.zero(::Type{Mat}, m, n) where {T,Mat<:SymTridiagonal{T}}
@assert m == n
Mat(zeros(T,m),zeros(T,m-1))
end
Base.zero(::Type{Mat}, m, n) where {T,Mat<:AbstractMatrix{T}} = Mat(zeros(T, m, n))
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 8646 | # * Scalar Newton polynomial
@doc raw"""
NewtonPolynomial(ζ, d)
The unique interpolation polynomial of a function in its Newton form,
i.e.
```math
f(z) \approx p(z) = \sum_{j=1}^m \divdiff(\zeta_{1:j})f \prod_{i=1}^{j-1}(z - \zeta_i),
```
where `ζ` are the interpolation points and
`d[j]=⏃(ζ[1:j])f` is the ``j``th divided difference of the
interpolated function `f`.
"""
struct NewtonPolynomial{T,ZT<:AbstractVector{T},DT<:AbstractVector{T}}
"Interpolation points of the Newton polynomial"
ζ::ZT
"Divided differences for the function interpolated by the Newton polynomial"
d::DT
end
"""
NewtonPolynomial(f, ζ)
Construct the Newton polynomial interpolating `f` at `ζ`,
automatically deriving the divided differences using [`⏃`](@ref).
"""
NewtonPolynomial(f::Function, ζ::AbstractVector) =
NewtonPolynomial(ζ, ⏃(f, ζ, 1, 0, 1))
Base.view(np::NewtonPolynomial, args...) =
NewtonPolynomial(view(np.ζ, args...), view(np.d, args...))
"""
(np::NewtonPolynomial)(z[, error=false])
Evaluate the Newton polynomial `np` at `z`. If `error` is set to
`true`, a second return value will contain an estimate of the error in
the polynomial approximation.
"""
function (np::NewtonPolynomial{T})(z::Number, error) where T
update_error = zero(real(T))
p = np.d[1]
r = z - np.ζ[1]
m = length(np.ζ)
for i = 2:m
p += np.d[i]*r
error && (update_error = abs(np.d[i])*norm(r))
r *= z - np.ζ[i]
end
p,update_error
end
(np::NewtonPolynomial)(z) = first(np(z,false))
function Base.show(io::IO, np::NewtonPolynomial)
degree = length(np.ζ)-1
ar,br = extrema(real(np.ζ))
ai,bi = extrema(imag(np.ζ))
compl_str(r,i) = if iszero(i)
r
elseif iszero(r) && iszero(i)
0
else
r + im*i
end
write(io, "Newton polynomial of degree $(degree) on $(compl_str(ar,ai))..$(compl_str(br,bi))")
end
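# Illustration: interpolating exp on [-2, 2] at a modest number of Leja points
# reproduces it to high accuracy (cf. the package tests), e.g.
#     np = NewtonPolynomial(exp, points(Leja(range(-2, stop=2, length=1001), 20)))
#     np(0.5) ≈ exp(0.5)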
# * Newton matrix polynomial
"""
NewtonMatrixPolynomial(np, pv, r, Ar, error, m)
This structure aids in the computation of the action of a matrix
polynomial on a vector. `np` is the [`NewtonPolynomial`](@ref), `pv`
is the desired result, `r` and `Ar` are recurrence vectors, and
`error` is an optional error estimator algorithm that can be used to
terminate the iterations early. `m` records how many matrix–vector
multiplications were used when evaluating the matrix polynomial.
"""
mutable struct NewtonMatrixPolynomial{T,NP<:NewtonPolynomial{T},Vec,ErrorEstim}
np::NP
"Action of the Newton polynomial `np` on a vector `v`"
pv::Vec
"Recurrence vector"
r::Vec
"Matrix–Recurrence vector product"
Ar::Vec
error::ErrorEstim
m::Int
end
function NewtonMatrixPolynomial(np::NewtonPolynomial{T}, n::Integer, res=nothing) where T
pv = Vector{T}(undef, n)
r = Vector{T}(undef, n)
Ar = Vector{T}(undef, n)
NewtonMatrixPolynomial(np, pv, r, Ar, res, 0)
end
function Base.show(io::IO, nmp::NewtonMatrixPolynomial)
n = length(nmp.r)
write(io, "$(n)×$(n) matrix $(nmp.np)")
end
matvecs(nmp::NewtonMatrixPolynomial) = nmp.m + matvecs(nmp.error)
estimate_converged!(::Nothing, args...) = false
matvecs(::Nothing) = 0
"""
mul!(w, p::NewtonMatrixPolynomial, A, v)
Compute the action of the [`NewtonMatrixPolynomial`](@ref) `p`
evaluated for the matrix (or linear operator) `A` acting on `v` and
storing the result in `w`, i.e. `w ← p(A)*v`.
"""
function LinearAlgebra.mul!(w, nmp::NewtonMatrixPolynomial, A, v, α::Number=true)
# Equation numbers refer to
#
# - Kandolf, P., Ostermann, A., & Rainer, S. (2014). A residual based
# error estimate for leja interpolation of matrix functions. Linear
# Algebra and its Applications, 456(nil),
# 157–173. http://dx.doi.org/10.1016/j.laa.2014.04.023
@unpack pv,r,Ar = nmp
@unpack d,ζ = nmp.np
nmp.m = 0
pv .= d[1] .* v # Equation (3c)
r .= v # r is initialized using the normal iteration, below
m = length(ζ)
for i = 2:m
# Equations (3a,b) are applied in reverse order, since at the
# beginning of each iteration, r is actually lagging one
# iteration behind, because r is initialized to v, not
# (A-ζ[1])*v.
# Equation (3b)
mul!(Ar, A, r)
isone(α) || lmul!(α, Ar)
nmp.m += 1
lmul!(-ζ[i-1], r)
r .+= Ar
# Equation (3a)
BLAS.axpy!(d[i], r, pv)
estimate_converged!(nmp.error, A, pv, v, Ar, i-1) && break
end
w .= pv
end
# ** Newton matrix polynomial derivative
"""
NewtonMatrixPolynomialDerivative(np, p′v, r′, Ar′, m)
This structure aids in the computation of the first derivative of the
[`NewtonPolynomial`](@ref) `np`. It is to be used in lock-step with
the evaluation of the polynomial, i.e. when evaluating the ``m``th
degree of `np`, this structure will provide the first derivative of
the ``m``th degree polynomial, storing the result in `p′v`. `r′` and
`Ar′` are recurrence vectors. Its main application is in
[`φₖResidualEstimator`](@ref). `m` records how many matrix–vector
multiplications were used when evaluating the matrix polynomial.
"""
mutable struct NewtonMatrixPolynomialDerivative{T,NP<:NewtonPolynomial{T},Vec}
np::NP
"Action of the time-derivative of the Newton polynomial `np` on a vector `v`"
p′v::Vec
"Recurrence vector"
r′::Vec
"Matrix–Recurrence vector product"
Ar′::Vec
m::Int
end
function NewtonMatrixPolynomialDerivative(np::NewtonPolynomial{T}, n::Integer) where T
p′v = Vector{T}(undef, n)
r′ = Vector{T}(undef, n)
Ar′ = Vector{T}(undef, n)
NewtonMatrixPolynomialDerivative(np, p′v, r′, Ar′, 0)
end
matvecs(nmpd::NewtonMatrixPolynomialDerivative) = nmpd.m
function step!(nmpd::NewtonMatrixPolynomialDerivative, A, Ar, i)
@unpack p′v,r′,Ar′ = nmpd
@unpack d,ζ = nmpd.np
if i == 1
# Equation (18c)
p′v .= false
copyto!(r′, Ar)
nmpd.m = 0
end
# Equation (18a)
BLAS.axpy!(d[i], r′, p′v)
# Equation (18b)
mul!(Ar′, A, r′)
nmpd.m += 1
lmul!(-ζ[i], r′)
r′ .+= Ar′
p′v
end
# ** Residual error estimator for φₖ functions
"""
φₖResidualEstimator{T,k}(nmpd, ρ, vscaled, estimate, tol)
An implementation of the residual error estimate of the φₖ functions,
as presented in
- Kandolf, P., Ostermann, A., & Rainer, S. (2014). A residual based
error estimate for Leja interpolation of matrix functions. Linear
Algebra and its Applications, 456(nil), 157–173. [DOI:
10.1016/j.laa.2014.04.023](http://dx.doi.org/10.1016/j.laa.2014.04.023)
`nmpd` is a [`NewtonMatrixPolynomialDerivative`](@ref) that
successively computes the time-derivative of the
[`NewtonMatrixPolynomial`](@ref) used to interpolate ``\\varphi_k(tA)``
(the time-step ``t`` is subsequently set to unity), `ρ` is the
residual vector, `vscaled` an auxiliary vector for `k>0`, and
`estimate` and `tol` are the estimated error and tolerance,
respectively.
"""
mutable struct φₖResidualEstimator{T,k,NMPD<:NewtonMatrixPolynomialDerivative{T},Vec,R}
nmpd::NMPD
"Residual vector"
ρ::Vec
"``v/(k-1)!`` cache"
vscaled::Vec
estimate::R
tol::R
m::Int
end
φₖResidualEstimator(k, nmpd::NMPD, ρ::Vec, vscaled::Vec, estimate::R, tol::R) where {T,NMPD<:NewtonMatrixPolynomialDerivative{T},Vec,R} =
φₖResidualEstimator{T,k,NMPD,Vec,R}(nmpd, ρ, vscaled, estimate, tol, 0)
function φₖResidualEstimator(k::Integer, np::NewtonPolynomial{T}, n::Integer, tol::R) where {T,R<:AbstractFloat}
nmpd = NewtonMatrixPolynomialDerivative(np, n)
ρ = Vector{T}(undef, n)
vscaled = Vector{T}(undef, k > 0 ? n : 0)
φₖResidualEstimator(k, nmpd, ρ, vscaled, convert(R, Inf), tol)
end
matvecs(error::φₖResidualEstimator) = error.m + matvecs(error.nmpd)
function estimate_converged!(error::φₖResidualEstimator{T,k}, A, pv, v, Ar, m) where {T,k}
@unpack ρ, vscaled = error
if m == 1
error.m = 0
if k > 0
vscaled .= v/Γ(k)
end
end
mul!(ρ, A, pv)
error.m += 1
ρ .-= step!(error.nmpd, A, Ar, m)
# # TODO: Figure out why this does not work as intended.
# if k > 0
# ρ .+= vscaled
# ρ .-= k*pv
# @. ρ += vscaled - k*pv
# end
error.estimate = norm(ρ)
if k > 0
error.estimate /= k
end
error.estimate < error.tol
end
error_estimator(::typeof(exp), args...) = φₖResidualEstimator(0, args...)
error_estimator(::typeof(φ₁), args...) = φₖResidualEstimator(1, args...)
error_estimator(fix::Base.Fix1{typeof(φ),<:Integer}, args...) = φₖResidualEstimator(fix.x, args...)
export error_estimator
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 2576 | @doc raw"""
φ₁(z)
Special case of [`φ`](@ref) for `k=1`, taking care to avoid numerical
rounding errors for small ``|z|``.
"""
function φ₁(z::T) where T
if abs(z) < eps(real(T))
one(T)
else
y = exp(z)
if abs(z) ≥ one(real(T))
(y - 1)/z
else
(y-1)/log(y)
end
end
end
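# Illustration: φ₁(0.0) == 1.0 (the z → 0 limit) and φ₁(1.0) ≈ ℯ - 1.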
@doc raw"""
φ(k, z)
Compute the entire function ``\varphi_k(z)``, ``z\in\mathbb{C}``,
which is recursively defined as [Eq. (2.11) of
[Hochbruck2010](http://dx.doi.org/10.1017/s0962492910000048)]
```math
\varphi_{k+1}(z) \equiv \frac{\varphi_k(z)-\varphi_k(0)}{z},
```
with the base cases
```math
\varphi_{0}(z) = \exp(z), \quad
\varphi_{1}(z) = \frac{\exp(z)-1}{z},
```
and the special case
```math
\varphi_k(0) = \frac{1}{k!}.
```
This function, as the base case [`φ₁`](@ref), is implemented to avoid
rounding errors for small ``|z|``.
"""
function φ(k, z::T) where T
if k == 0
exp(z)
elseif k == 1
φ₁(z)
else
abs(z) < eps(real(T)) && return 1/gamma(k+1)
# Eq. (2.11) of
#
# - Hochbruck, M., & Ostermann, A. (2010). Exponential Integrators. Acta
# Numerica, 19(nil),
# 209–286. http://dx.doi.org/10.1017/s0962492910000048
#
if abs(z) > k*one(real(T))
(φ(k-1, z) - φ(k-1, zero(T)))/z
else
# Horner's rule applied to the Taylor expansion of φₖ = ∑ zⁱ/(k+i)!
            # The expansion is truncated after n = 10k terms, which is more than sufficient for |z| ≤ k.
n = 10k
b = one(T)/gamma(k+n+1)
for i = n-1:-1:0
b = muladd(b, z, 1/gamma(k+i+1))
end
b
end
end
end
"""
φ(k)
Return a function corresponding to `φₖ`.
# Examples
```jldoctest
julia> φ(0)
exp (generic function with 14 methods)
julia> φ(1)
φ₁ (generic function with 1 method)
julia> φ(2)
φ₂ (generic function with 1 method)
julia> φ(15)
φ₁₅ (generic function with 1 method)
julia> φ(15)(5.0 + im)
1.0931836313419128e-12 + 9.301475570434819e-14im
```
"""
function φ(k::Integer)
if k == 0
exp
elseif k == 1
φ₁
else
Base.Fix1(φ, k)
end
end
Base.string(f::Base.Fix1{typeof(φ),<:Integer}) = "φ$(to_subscript(f.x))"
function Base.show(io::IO, f::Base.Fix1{typeof(φ),<:Integer})
write(io, "φ")
write(io, to_subscript(f.x))
n = length(methods(f))
write(io, " (generic function with $n method$(n > 1 ? "s" : ""))")
end
Base.show(io::IO, ::MIME"text/plain", f::Base.Fix1{typeof(φ),<:Integer}) =
show(io, f)
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 2881 | num_steps(f, xmax) = 1
num_steps(::Union{typeof(exp),typeof(φ₁)# ,Base.Fix1{typeof(φ),<:Integer}
}, xmax) =
max(1, ceil(Int, xmax/0.3)) # This number is more conservative
# that actually is necessary, however,
# since ts_div_diff_table currently is
# implemented via repeated matrix
# powers, truncation of very small
# numbers occur. With a proper routine
# for powers of Bidiagonal matrices
# (c.f. McCurdy 1984), this value can
# be increased.
function num_steps(::Union{typeof(sin),typeof(cos)}, xmax)
J = 1
while xmax/J > 1.59
J = nextpow(2, J+1)
end
J
end
"""
propagate_div_diff(::typeof(exp), expτH, J, args...)
Find the divided differences of `exp` by utilizing that
``\\exp(a+b)=\\exp(a)\\exp(b)``.
"""
function propagate_div_diff(::typeof(exp), expτH, J, args...)
d = expτH[:,1]
for j = 1:J-1
lmul!(expτH, d)
end
d
end
@doc raw"""
propagate_div_diff(::typeof(φ₁), φ₁H, J, H, τ)
Find the divided differences of `φ₁` by solving the ODE
```math
\dot{\vec{y}}(t) = \mat{H} \vec{y}(t) + \vec{e}_1, \quad \vec{y}(0) = 0,
```
by iterating
```math
\vec{y}_{j+1} = \vec{y}_j + \tau\varphi_1(\tau\mat{H})(\mat{H}\vec{y}_j + \vec{e}_1),
\quad j=0,...,J-1.
```
"""
function propagate_div_diff(::typeof(φ₁), φ₁H, J, H, τ)
d = φ₁H[:,1]
tmp = similar(d)
lmul!(τ, d)
for j = 1:J-1
mul!(tmp, H, d)
tmp[1] += 1
d .+= lmul!(τ, lmul!(φ₁H, tmp))
end
d
end
@doc raw"""
propagate_div_diff_sin_cos(sinH, cosH, J)
Find the divided differences tables of `sin` and `cos` simultaneously,
by utilizing the double-angle formulæ
```math
\sin2\theta = 2\sin\theta\cos\theta, \quad
\cos2\theta = 1 - \sin^2\theta,
```
recursively, doubling the angle at each iteration until the desired
angle is achieved.
"""
function propagate_div_diff_sin_cos(sinH, cosH, J)
S = 2sinH*cosH
C = I - 2sinH^2
while J > 2
tmp = 2S*C
C = I - 2S^2
S = tmp
J >>= 1
end
S,C
end
"""
propagate_div_diff(::typeof(sin), sinH, J, H, τ)
Find the divided differences of `sin`; see
[`propagate_div_diff_sin_cos`](@ref).
"""
function propagate_div_diff(::typeof(sin), sinH, J, H, τ)
@assert ispow2(J)
propagate_div_diff_sin_cos(sinH, taylor_series(cos)(τ*H), J)[1][:,1]
end
"""
propagate_div_diff(::typeof(cos), cosH, J, H, τ)
Find the divided differences of `cos`; see
[`propagate_div_diff_sin_cos`](@ref).
"""
function propagate_div_diff(::typeof(cos), cosH, J, H, τ)
@assert ispow2(J)
propagate_div_diff_sin_cos(taylor_series(sin)(τ*H), cosH, J)[2][:,1]
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 6477 | mutable struct Lanczos{T,Op}
A::Op
Q::Matrix{T}
α::Vector{T}
β::Vector{T}
k::Int
end
function Lanczos(A, K; kwargs...)
T = real(eltype(A))
Q = zeros(T, size(A,1), K)
α = zeros(T, K)
β = zeros(T, K)
reset!(Lanczos(A, Q, α, β, 1); kwargs...)
end
function reset!(l::Lanczos{T}; v=nothing) where T
if isnothing(v)
v = rand(T, size(l.A,1))
end
l.Q[:,1] = normalize(v)
l.k = 1
l
end
function step!(l::Lanczos; verbosity=0)
@unpack A,Q,α,β,k = l
k == size(Q,2) && return
v = view(Q, :, k)
w = view(Q, :, k+1)
mul!(w, A, v)
α[k] = dot(v, w)
w .-= α[k] .* v
if k > 1
w .-= β[k-1] .* view(Q, :, k-1)
end
β[k] = norm(w)
w ./= β[k]
verbosity > 0 && printfmtln("iter {1:d}, α[{1:d}] {2:e}, β[{1:d}] {3:e}", k, α[k], β[k])
l.k += 1
end
LinearAlgebra.SymTridiagonal(L::Lanczos) =
SymTridiagonal(view(L.α, 1:L.k-1), view(L.β, 1:L.k-2))
"""
hermitian_spectral_range(A;[ K=20])
Estimate the spectral range of a Hermitian operator `A` using Algorithm 1 of
- Zhou, Y., & Li, R. (2011). Bounding the spectrum of large hermitian
matrices. Linear Algebra and its Applications, 435(3),
480–493. http://dx.doi.org/10.1016/j.laa.2010.06.034
"""
function hermitian_spectral_range(A; K=min(20,size(A,1)-1), ctol=√(eps(real(eltype(A)))),
verbosity=0, kwargs...)
verbosity > 0 &&
@info "Trying to estimate spectral interval for allegedly Hermitian operator"
l = Lanczos(A, K+1; kwargs...)
K̃ = min(4,K-1)
for k = 1:K̃
step!(l; verbosity=verbosity-1)
end
U = real(eltype(A))
βₖ = zero(U)
λₘᵢₙₖ = zero(U)
λₘₐₓₖ = zero(U)
zₘᵢₙₖ = zero(U)
zₘₐₓₖ = zero(U)
for k = K̃+1:K
step!(l; verbosity=verbosity-1)
ee = eigen(SymTridiagonal(l))
Z = ee.vectors
βₖ = l.β[k]
zₘᵢₙₖ = abs(Z[k,1])
zₘₐₓₖ = abs(Z[k,k])
λₘᵢₙₖ = ee.values[1]
λₘₐₓₖ = ee.values[k]
verbosity > 0 && printfmtln("k = {1:3d} β[k] = {2:e} λₘᵢₙ(Tₖ) = {3:+e} λₘₐₓ(Tₖ) = {4:+e} zₘᵢₙ[k]β[k] = {5:e} zₘₐₓ[k]β[k] = {6:e}",
k, βₖ, λₘᵢₙₖ, λₘₐₓₖ, zₘᵢₙₖ*βₖ, zₘₐₓₖ*βₖ)
if zₘₐₓₖ*βₖ < ctol
# Zhou et al. (2011), bound (2.8)
return Line(λₘᵢₙₖ - min(zₘᵢₙₖ, abs(Z[k,2]), abs(Z[k,3]))*βₖ,
λₘₐₓₖ + max(zₘₐₓₖ, abs(Z[k,k-1]), abs(Z[k,k-2]))*βₖ)
end
end
# Zhou et al. (2011), mean of bounds (2.5,6)
Line(λₘᵢₙₖ - (1+zₘᵢₙₖ)*βₖ/2,
λₘₐₓₖ + (1+zₘₐₓₖ)*βₖ/2)
end
"""
spectral_range(A[; ctol=√ε, verbosity=0])
Estimate the spectral range of the matrix/linear operator `A` using
[ArnoldiMethod.jl](https://github.com/haampie/ArnoldiMethod.jl). If
the spectral range along the real/imaginary axis is smaller than
`ctol`, it is compressed into a line. Returns a spectral
[`Shape`](@ref).
# Examples
```julia-repl
julia> A = Diagonal(1.0:6)
6×6 Diagonal{Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}}:
1.0 ⋅ ⋅ ⋅ ⋅ ⋅
⋅ 2.0 ⋅ ⋅ ⋅ ⋅
⋅ ⋅ 3.0 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 4.0 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ 5.0 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 6.0
julia> MatrixPolynomials.spectral_range(A, verbosity=2)
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
[ Info: Imaginary extent of spectral range 0.0 below tolerance 1.4901161193847656e-8, conflating.
[ Info: 0.0 below tolerance 1.4901161193847656e-8, truncating.
[ Info: 0.0 below tolerance 1.4901161193847656e-8, truncating.
MatrixPolynomials.Line{Float64}(0.9999999999999998 + 0.0im, 6.0 + 0.0im)
julia> MatrixPolynomials.spectral_range(exp(im*π/4)*A, verbosity=2)
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
Converged: 1 of 1 eigenvalues in 6 matrix-vector products
MatrixPolynomials.Rectangle{Float64}(0.7071067811865468 + 0.7071067811865478im, 4.242640687119288 + 4.242640687119283im)
```
The second example should also be a [`Line`](@ref), but the algorithm
is not yet clever enough.
"""
function spectral_range(A; ctol=√(eps(real(eltype(A)))), ishermitian=false, verbosity=0, kwargs...)
ishermitian && return hermitian_spectral_range(A; ctol=ctol, verbosity=verbosity, kwargs...)
r = map([(SR(),real),(SI(),imag),(LR(),real),(LI(),imag)]) do (which,comp)
schurQR,history = partialschur(A; which=which, nev = 1, kwargs...)
verbosity > 0 && println(history)
comp(first(schurQR.eigenvalues))
end
for (label,(i,j)) in [("Real", (1,3)),("Imaginary", (2,4))]
if abs(r[i]-r[j]) < ctol
verbosity > 1 && @info "$label extent of spectral range $(abs(r[i]-r[j])) below tolerance $(ctol), conflating."
r[i] = r[j] = (r[i]+r[j])/2
end
end
for i = 1:4
if abs(r[i]) < ctol
verbosity > 1 && @info "$(abs(r[i])) below tolerance $(ctol), truncating."
r[i] = zero(r[i])
end
end
a,b = (r[1]+im*r[2], r[3]+im*r[4])
if real(a) == real(b) || imag(a) == imag(b)
Line(a,b)
else
# There could be cases where all the eigenvalues fall on a
# sloped line in the complex plane, but we don't know how to
# deduce that yet. The user is free to define such sloped
# lines manually, though.
Rectangle(a,b)
end
end
function spectral_range(A::SymTridiagonal{<:Real}; kwargs...)
n = size(A,1)
a,b = minmax(first(eigvals(A, 1:1)),
first(eigvals(A, n:n)))
Line(a,b)
end
spectral_range(A::Diagonal{<:Real}; kwargs...) =
Line(minimum(A.diag), maximum(A.diag))
function spectral_range(A::Diagonal{<:Complex}; kwargs...)
d = A.diag
rd = real(d)
id = imag(d)
a = minimum(rd) + im*minimum(id)
b = maximum(rd) + im*maximum(id)
if real(a) == real(b) || imag(a) == imag(b)
Line(a,b)
else
Rectangle(a, b)
end
end
"""
spectral_range(t, A)
Finds the spectral range of `t*A`; if `t` is a vector, find the
largest such range.
"""
function spectral_range(t, A; kwargs...)
λ = spectral_range(A; kwargs...)
ta,tb = extrema(t)
ta*λ ∪ tb*λ
end
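# Illustration: spectral_range([0.0, 1.0], Diagonal(1.0:6)) yields a Line
# covering 0..6 on the real axis, i.e. the union of the ranges of 0*A and 1*A.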
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 3984 | """
Shape
Abstract base type for different shapes in the complex plane
encircling the spectra of linear operators.
"""
abstract type Shape{T} end
# * Line
"""
Line(a, b)
For spectra falling on a line in the complex plane from `a` to `b`.
"""
struct Line{T} <: Shape{T}
a::Complex{T}
b::Complex{T}
function Line(a::A, b::B) where {A,B}
T = real(promote_type(A,B))
new{T}(Complex{T}(a), Complex{T}(b))
end
end
"""
n * l::Line
Scale the [`Line`](@ref) `l` by `n`.
"""
Base.:(*)(n::Number, l::Line) = Line(n*l.a, n*l.b)
Base.iszero(l::Line) = iszero(l.a) && iszero(l.b)
"""
range(l::Line, n)
Generate a uniform distribution of `n` values along the [`Line`](@ref)
`l`. If the line is on the real axis, the resulting values will be
real as well.
# Examples
```julia-repl
julia> range(MatrixPolynomials.Line(0, 1.0im), 5)
5-element LinRange{Complex{Float64}}:
0.0+0.0im,0.0+0.25im,0.0+0.5im,0.0+0.75im,0.0+1.0im
julia> range(MatrixPolynomials.Line(0, 1.0), 5)
0.0:0.25:1.0
```
"""
function Base.range(l::Line, n)
a,b = if isreal(l.a) && isreal(l.b)
real(l.a), real(l.b)
else
l.a, l.b
end
range(a,stop=b,length=n)
end
"""
mean(l::Line)
Return the mean value along the [`Line`](@ref) `l`.
"""
function Statistics.mean(l::Line)
μ = (l.a + l.b)/2
isreal(μ) ? real(μ) : μ
end
function LinearAlgebra.normalize(z::Number)
N = norm(z)
iszero(N) ? z : z/N
end
"""
a::Line ∪ b::Line
Form the union of the [`Line`](@ref)s `a` and `b`, which need to be
collinear.
"""
function Base.union(a::Line, b::Line)
a == b && return a
da = normalize(a.b - a.a)
db = normalize(b.b - b.a)
# Every line is considered "parallel" with the origin
if !iszero(a) && !iszero(b)
cosθ = da'db
cosθ ≈ 1 || cosθ ≈ -1 ||
throw(ArgumentError("Lines $a and $b not parallel"))
end
# If the lengths of both lines are zero, then they are "collinear"
# by definition. Otherwise, the points of one line have to lie on
# the extension of the other line.
if !iszero(da) || !iszero(db)
t = (b.a - a.a)/(iszero(da) ? db : da)
isapprox(imag(t), zero(t), atol=eps(real(t))) ||
throw(ArgumentError("Lines $a and $b not collinear"))
end
ra,rb = extrema([real(a.a),real(a.b),real(b.a),real(b.b)])
ia,ib = extrema([imag(a.a),imag(a.b),imag(b.a),imag(b.b)])
Line(ra+im*ia, rb+im*ib)
end
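# Illustration: Line(0, 2.0) ∪ Line(1.0, 3.0) == Line(0, 3.0), whereas the union
# of two lines that are not collinear throws an ArgumentError.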
# * Rectangle
"""
Rectangle(a,b)
For spectra falling within a rectangle in the complex plane with
corners `a` and `b`.
"""
struct Rectangle{T} <: Shape{T}
a::Complex{T}
b::Complex{T}
function Rectangle(a::A, b::B) where {A,B}
T = real(promote_type(A,B))
new{T}(Complex{T}(a), Complex{T}(b))
end
end
"""
n * r::Rectangle
Scale the [`Rectangle`](@ref) `r` by `n`.
"""
Base.:(*)(n::Number, r::Rectangle) = Rectangle(n*r.a, n*r.b)
"""
range(r::Rectangle, n)
Generate a uniform distribution of `n` values along the diagonal of
the [`Rectangle`](@ref) `r`.
This assumes that the eigenvalues lie on the diagonal of the
rectangle, i.e. that the spread is negligible. It would be more
correct to instead generate samples along the sides of the rectangle,
however, [`spectral_range`](@ref) needs to be modified to correctly
identify spectral ranges falling on a line that is not lying on the
real or imaginary axis.
"""
Base.range(r::Rectangle, n) = range(r.a,stop=r.b,length=n)
"""
mean(r::Rectangle)
Return the middle value of the [`Rectangle`](@ref) `r`.
"""
function Statistics.mean(r::Rectangle)
μ = (r.a + r.b)/2
isreal(μ) ? real(μ) : μ
end
"""
a::Rectangle ∪ b::Rectangle
Find the smallest [`Rectangle`](@ref) encompassing `a` and `b`.
"""
function Base.union(a::Rectangle, b::Rectangle)
ra,rb = extrema([real(a.a),real(a.b),real(b.a),real(b.b)])
ia,ib = extrema([imag(a.a),imag(a.b),imag(b.a),imag(b.b)])
Rectangle(ra+im*ia, rb+im*ib)
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 2395 | @doc raw"""
TaylorSeries(d, c)
Represents the Taylor series of a function as
```math
f(x) = \sum_{k=0}^\infty c_k x^{d_k},
```
where `dₖ = d(k)` and `cₖ = c(k)`.
"""
struct TaylorSeries{D,C}
d::D
c::C
end
function Base.show(io::IO, ts::TaylorSeries)
for k = 0:3
d = ts.d(k)
c = ts.c(k)
k > 0 && write(io, " ")
write(io, c < 0 ? "- " : (k > 0 ? "+ " : ""))
!isone(abs(c)) && write(io, "$(abs(c))")
if d == 0
isone(abs(c)) && write(io, "1")
elseif d == 1
write(io, "x")
else
write(io, "x^$(d)")
end
end
write(io, " + ...")
end
function bisect_find_last(f, r::UnitRange{<:Integer})
f(r[1]) || return nothing
f(r[end]) && return r[end]
while length(r) > 2
i = div(length(r),2)
if isodd(i)
i = length(r) - i
end
fhalf = f(r[i])
r = fhalf ? (r[i]:r[end]) : (r[1]:r[i])
end
r[1]
end
"""
    (ts::TaylorSeries)(x; max_degree=17)
Evaluate the Taylor series represented by `ts` up to a maximum degree
in `x` (default 17).
"""
function (ts::TaylorSeries)(x; max_degree=17)
kmax = bisect_find_last(k -> ts.d(k) ≤ max_degree, 0:max_degree)
v = zero(closure(x), size(x)...) + ts.c(kmax)*I
d_prev = ts.d(kmax)
# This is a special version of Horner's rule that takes into
# account that some powers may be absent from the Taylor
# polynomial.
for k = kmax-1:-1:0
d = ts.d(k)
for j = 1:(d_prev-d)
v *= x
end
d_prev = d
v += ts.c(k)*I
# Handle odd cases, e.g. sine
if k == 0 && d == 1
v *= x
end
end
v
end
macro taylor_series(f, d, c)
docstring = """
taylor_series(::typeof($f))
Generates the [`TaylorSeries`](@ref) of `$f(x) = ∑ₖ x^($d) $c`.
# Example
```julia-repl
julia> taylor_series($f)
"""
quote
@doc $docstring*string(TaylorSeries(k -> $d, k -> $c))*"\n```"
taylor_series(::typeof($f)) = TaylorSeries(k -> $d, k -> $c)
end |> esc
end
@taylor_series exp k 1/Γ(k+1)
@taylor_series sin 2k+1 (-1)^k/Γ(2k+2)
@taylor_series cos 2k (-1)^k/Γ(2k+1)
@taylor_series sinh 2k+1 1/Γ(2k+2)
@taylor_series cosh 2k 1/Γ(2k+1)
@taylor_series φ₁ k 1/Γ(k+2)
taylor_series(fix::Base.Fix1{typeof(φ),<:Integer}) = TaylorSeries(k -> k, k -> 1/Γ(k+fix.x+1))
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 2680 | @testset "Divided differences" begin
@testset "Infinitesimal divided differences" begin
# This will lead to catastrophic cancellation for the standard
# recursive formulation of divided differences; the goal is to
# ascertain the accuracy of the optimized methods, which
# should converge to the Taylor expansion of φₖ(z) for
# infinitesimal |z|.
@testset "$label" for (label,μ) in [("Real", 1.0),
("Imaginary", 1.0im),
("Complex", exp(im*π/4))]
m = 100
ξ = μ*eps(Float64)*range(-1,stop=1,length=m)
x = μ*eps(Float64)*range(-1,stop=1,length=1000)
@testset "k = $k" for k = 0:6
f = φ(k)
f_exact = f.(x)
taylor_expansion = vcat(1.0 ./ [Γ(k+n+1) for n = 0:m-1])
@testset "$method" for (method, func) in [
("Taylor series", (k,ξ) -> ts_div_diff_table(f, collect(ξ), 1, 0, 1)),
("Basis change", (k,ξ) -> φₖ_div_diff_basis_change(k, ξ)),
("Auto", (k,ξ) -> ⏃(f, collect(ξ), 1, 0, 1))
]
d = func(k, ξ)
@test d ≈ taylor_expansion atol=1e-15
# The Taylor expansion coefficients for φₖ(z)
# should all be real, to machine precision, even
# for complex z.
@test norm(imag(d)) ≈ 0 atol=1e-15
f_d = NewtonPolynomial(ξ, d).(x)
@test f_d ≈ f_exact atol=1e-14
end
end
end
end
@testset "Finite divided differences" begin
@testset "$label" for (label,μ) in [("Real", 1.0),
("Imaginary", 1.0im),
("Complex", exp(im*π/4))]
m = 100
dx = 1.0
ξ = μ*dx*range(-1,stop=1,length=m)
x = μ*dx*range(-1,stop=1,length=1000)
@testset "k = $k" for k = 0:6
f = φ(k)
f_exact = f.(x)
@testset "$method" for (method, func) in [
("Taylor series", (k,ξ) -> ts_div_diff_table(f, collect(ξ), 1, 0, 1)),
("Basis change", (k,ξ) -> φₖ_div_diff_basis_change(k, ξ)),
("Auto", (k,ξ) -> ⏃(f, collect(ξ), 1, 0, 1))
]
d = func(k, ξ)
f_d = NewtonPolynomial(ξ, d).(x)
@test f_d ≈ f_exact atol=1e-12
end
end
end
end
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 1959 | function test_stepping(f, x, m, μ, x̃, h̃; kwargs...)
@info "Reference solution"
ỹ = @time reduce(hcat, h̃.(x̃))
n = size(ỹ,1)
t = μ*step(x̃)
f̂ = FuncV(f, x, m, t; kwargs...)
y = similar(ỹ)
y[:,1] = ỹ[:,1]
@info "Leja/Newton solution"
@time for i = 2:length(x̃)
mul!(view(y,:,i), f̂, view(y,:,i-1))
end
global_error = abs.(y-ỹ)
# This is only a rough estimate of the local error
local_error = vcat(0, abs.(diff(global_error, dims=2)))
@info "Maximum global error: $(maximum(global_error))"
@info "Maximum local error: $(maximum(local_error))"
global_error, local_error
end
function tdse(N, ρ, ℓ)
j = 1:N
r = (j .- 1/2)*ρ
j² = j.^2
α = (j²./(j² .- 1/4))[1:end-1]
β = (j² - j .+ 1/2)./(j² - j .+ 1/4)
# T = Tridiagonal(α, -2β, α)/(-2ρ^2)
T = SymTridiagonal(-2β, α)/(-2ρ^2)
V = Diagonal(-1 ./ r + ℓ*(ℓ + 1) ./ 2r.^2)
ψ₀ = exp.(-r.^2)
lmul!(1/√ρ, normalize!(ψ₀))
T,V,ψ₀
end
@testset "FuncV" begin
@testset "TDSE" begin
N = 7
ρ = 0.1
L = 1
tmax = 1.0
t = range(0, stop=tmax, length=1000)
T,V,ψ₀ = tdse(N, ρ, L)
H = T+V
B = -im*H
# Exact solution
F̃ = t -> exp(t*Matrix(B))*ψ₀
m = 40 # Number of Leja points
@testset "Tolerance = $tol" for (tol,exp_error) in [(3e-14,7e-12),
(1e-12,1e-9)]
@testset "Scaling and shifting: $(scale_and_shift)" for scale_and_shift=[true,false]
global_error, local_error = test_stepping(exp, H, m, -im, t, F̃,
tol=tol, scale_and_shift=scale_and_shift)
@test all(global_error .≤ exp_error)
@test all(local_error .≤ 10tol)
# Should test number of Leja points used
end
end
end
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 988 | @testset "Leja points" begin
@testset "$llabel Leja points" for (llabel,LejaType,extra_func!) = [("Discretized", (a,b,n)->Leja(range(a,stop=b,length=1001),n), leja!),
("Fast", (a,b,n)->FastLeja(a,b,n), fast_leja!)]
@testset "$label Leja points" for (label,comp,factor) = [("Real", real, 1.0),
("Imaginary", imag, 1.0im)]
a = -2*factor
b = 2*factor
n = 100
l = LejaType(a, b, n)
ζ = points(l)
@test abs(ζ[1]) == 2
@test ζ[2] == -ζ[1]
@test ζ[3] == 0
@test length(ζ) == n
@test allunique(ζ)
@test all(comp(a) .≤ comp(ζ) .≤ comp(b))
extra_func!(l, 300)
@test length(ζ) == 300
@test allunique(ζ)
@test all(comp(a) .≤ comp(ζ) .≤ comp(b))
end
end
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 3905 | function test_scalar_newton_leja(f, x, m, x̃, h, h̃)
ξ = points(Leja(x, m))
@info "$m $(eltype(ξ)) Leja points"
np = NewtonPolynomial(f, ξ)
y = h.(x̃,Ref(np))
ỹ = h̃.(x̃)
ms = 1:m
errors = zeros(m)
error_estimates = zeros(m,1)
for m = ms
np′ = view(np, 1:m)
p = x -> begin
val,err = np′(x, true)
error_estimates[m,1] = max(error_estimates[m,1], err)
val
end
y′ = similar(y)
for i = eachindex(x̃)
y′[i] = h(x̃[i],p)
end
errors[m] = norm(y′ - h̃.(x̃))
end
norm(y-ỹ),errors,error_estimates
end
function test_mat_newton_leja(f, x, m, x̃, h!, h̃)
ξ = points(Leja(x, m))
@info "$m $(eltype(ξ)) Leja points"
np = NewtonPolynomial(f, ξ)
@info "Reference solution"
ỹ = @time reduce(hcat, h̃.(x̃))
n = size(ỹ, 1)
nmp = NewtonMatrixPolynomial(np, n)
y = similar(ỹ)
hh! = h!(nmp)
@info "Leja/Newton solution"
@time for i = eachindex(x̃)
hh!(view(y,:,i), x̃[i])
end
ms = 1:m
errors = zeros(m)
for m = ms
np′ = view(np, 1:m)
nmp′ = NewtonMatrixPolynomial(np′, n)
hh! = h!(nmp′)
y′ = similar(y)
for i = eachindex(x̃)
hh!(view(y′,:,i), x̃[i])
end
errors[m] = norm(y′ - ỹ)
end
norm(y-ỹ),errors
end
@testset "Newton polynomials" begin
@testset "Scalar polynomials" begin
dx = 2
x = dx*range(-1,stop=1,length=1000)
@testset "Quadratic function" begin
ξ = points(Leja(x, 10))
f = x -> x^2
d = std_div_diff(f, ξ, 1, 0, 1)
np = NewtonPolynomial(ξ, d)
@test np.d[1:3] ≈ [4,0,1]
@test all(d -> isapprox(d, 0, atol=√(eps(d))), np.d[4:end])
end
@testset "Exponential function" begin
Δy,errors,error_estimates = test_scalar_newton_leja(exp, x, 20, x, (t, p) -> p(t), exp)
@test Δy < 7e-14
@test all(errors[end-2:end] .< 1e-13)
end
@testset "Inhomogeneous ODE, $kind" for (kind,m,tol) in [(:real,43,1e-13), (:complex,60,5e-13)]
y₀ = 1.0
g = -3.0
tmax = 10.0
b,tmin = if kind == :real
-2, 0
else
-2im, -tmax
end
t = range(tmin, stop=tmax, length=1000)
h = (t, p) -> y₀ + t*p(t*b)*(b*y₀ + g)
h̃ = t -> exp(t*b)*(y₀ + g/b) - g/b
Δy,errors,error_estimates = test_scalar_newton_leja(φ₁, b*t, m, t, h, h̃)
@test Δy < tol
@test all(errors[end-12:end] .< 3tol)
# Should also look at error estimates
end
end
@testset "Matrix polynomials" begin
@testset "Inhomogeneous coupled ODEs, $kind" for (kind,m,tol) in [(:real,43,1e-12), (:complex,60,1e-12)]
n = 10 # Number of ODEs
Y₀ = 1.0*ones(kind == :real ? Float64 : ComplexF64, n)
G = -3*ones(n) # Inhomogeneous terms
n_discr = 1000 # Number of points spanning eigenspectrum interval
m = 60 # Number of Leja points
tmax = 10.0
b,c,tmin = if kind == :real
-2, 0.2, 0
else
-2im, 0.2im, -tmax
end
Bdiag = Diagonal(b./(1:n))
o = ones(n)
B = Bdiag + Tridiagonal(c*o[2:end], 0c*o, c*o[2:end])
t = range(tmin, stop=tmax, length=1000)
@show λ = spectral_range(t, B, verbosity=2)
H! = function(p)
(w,t) -> BLAS.axpy!(1, Y₀, lmul!(t, mul!(w, p, t*B, B*Y₀ + G)))
end
H̃ = t -> exp(t*Matrix(B))*(Y₀ + B\G) - B\G
Δy,errors = test_mat_newton_leja(φ₁, range(λ, n_discr), m, t, H!, H̃)
@test errors[end] < 5e-12
end
end
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 417 | using MatrixPolynomials
using Test
using LinearAlgebra
import MatrixPolynomials: Line, Rectangle, spectral_range,
Leja, leja!, FastLeja, fast_leja!, points,
φ₁, φ, Γ,
std_div_diff, ts_div_diff_table, φₖ_div_diff_basis_change, ⏃,
NewtonPolynomial, NewtonMatrixPolynomial,
FuncV
include("spectral.jl")
include("leja.jl")
include("divided_differences.jl")
include("newton.jl")
include("funcv.jl")
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | code | 1492 | @testset "Spectral shapes" begin
@testset "Lines" begin
l = Line(0.1, 1.0)
@test 2l == Line(0.2, 2.0)
@test range(l, 11) == range(0.1, stop=1.0, length=11)
@test range(im*l, 11) == range(0.1im, stop=1.0im, length=11)
@test l ∪ l == l
@test l ∪ Line(0.0, 0.9) == Line(0.0, 1.0)
@test l ∪ Line(0.0, 0.0) == Line(0.0, 1.0)
@test Line(0.5(1+im),1+im) ∪ Line(0.0, 0.0) == Line(0.0, 1+im)
@test Line(0.1+im, 0.1+im) ∪ Line(0.0,0.0) == Line(0,0.1+im)
@test_throws ArgumentError l ∪ Line(0.0, 1+im)
@test_throws ArgumentError l ∪ Line(0.1+im, 1+im)
@test_throws ArgumentError Line(0.1+im, 1+im) ∪ Line(0.0,0.0)
end
@testset "Rectangles" begin
r = Rectangle(0.0, 1.0+im)
@test 0.5r == Rectangle(0.0, 0.5+0.5im)
@test range(r, 11) == range(0.0, stop=1.0+im, length=11)
@test r ∪ Rectangle(0.5*(1+im), 1.5*(1+im)) == Rectangle(0, 1.5*(1+im))
end
end
@testset "Spectral ranges" begin
A = Diagonal([1.0, 2])
λ = spectral_range(A)
@test λ isa Line
@test λ.a ≈ 1.0
@test λ.b ≈ 2.0
λim = spectral_range(-im*A)
@test λim isa Line
@test λim.a ≈ -2.0im
@test λim.b ≈ -1.0im
λcomp = spectral_range(exp(-im*π/4)*A)
@test λcomp isa Rectangle
@test λcomp.a ≈ √2*(0.5 - im)
@test λcomp.b ≈ √2*(1 - 0.5im)
λt = spectral_range(-1:0.1:1, A)
@test λt isa Line
@test λt.a ≈ -2.0
@test λt.b ≈ 2.0
end
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | docs | 2641 | # MatrixPolynomials.jl
[](https://jagot.github.io/MatrixPolynomials.jl/stable)
[](https://jagot.github.io/MatrixPolynomials.jl/dev)
[](https://github.com/jagot/MatrixPolynomials.jl/actions)
[](https://codecov.io/gh/jagot/MatrixPolynomials.jl)
This package aids in the computation of the action of a matrix
polynomial on a vector, i.e. `p(A)v`, where `A` is a (square) matrix
(or a linear operator) that is supplied to the polynomial `p`. The
matrix polynomial `p(A)` is never formed explicitly, instead only its
action on `v` is evaluated. This is commonly used in time-stepping
algorithms for ordinary differential equations (ODEs) and discretized
partial differential equations (PDEs) where `p` is an approximation of
the exponential function (or the related `φ` functions:
`φ₀(z) = exp(z)`, `φₖ₊₁ = [φₖ(z)-φₖ(0)]/z`, `φₖ(0)=1/k!`) on the
field-of-values of the matrix `A`, which for the methods in this
package needs to be known before-hand.
## Alternatives
Other packages with similar goals, but instead based on matrix
polynomials found via Krylov iterations are
- https://github.com/JuliaDiffEq/ExponentialUtilities.jl
- https://github.com/Jutho/KrylovKit.jl
Krylov iterations do not need to know the field-of-values of the
matrix `A` before-hand, instead, an orthogonal basis is built-up
on-the-fly, by repeated action of `A` on test vectors: `Aⁿ*v`. This
process is however very sensitive to the condition number of `A`,
something that can be alleviated by iterating a shifted and inverted
matrix instead: `(A-σI)⁻¹` (rational Krylov). Not all matrices/linear
operators are easily inverted/factorized, however.
Moreover, the Krylov iterations for general matrices (then called
Arnoldi iterations) require long-term recurrences with mutual
orthogonalization along with inner products, all of which can be
costly to compute. Finally, a subspace approximation of the polynomial
`p` of a upper Hessenberg matrix needs to computed. The
real-symmetric/complex-Hermitian case (Lanczos iterations) reduces to
three-term recurrences and a tridiagonal subspace matrix. In contrast,
the polynomial methods of this packages two-term recurrences only, no
orthogonalization (and hence no inner products), and finally no
evaluation of the polynomial on a subspace matrix. This could
potentially mean that the methods are easier to implement on a GPU.
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | docs | 8303 | # Divided differences
The divided differences of a function ``f`` with respect to a set of
interpolation points ``\{\zeta_i\}`` is defined as [^McCurdy]
```math
\begin{equation}
\label{eqn:div-diff-def}
\divdiff(\zeta_{i:j})f \defd
\frac{1}{2\pi\im}
\oint
\diff{z}
\frac{f(z)}{(z-\zeta_i)(z-\zeta_{i+1})...(z-\zeta_j)},
\end{equation}
```
where the integral is taken along a simple contour encircling the
poles once. A common approach to evaluate the divided differences of
``f``, and an alternative definition, is the recursive scheme
```math
\begin{equation}
\label{eqn:div-diff-recursive}
\tag{\ref{eqn:div-diff-def}*}
\divdiff(\zeta_{i:j},z)f \defd
\frac{\divdiff(\zeta_{i:j-1},z)f-\divdiff(\zeta_{i:j})f}{z - \zeta_j}, \quad
\divdiff(\zeta_i,z)f \defd
\frac{\divdiff(z)f-\divdiff(\zeta_i)f}{z - \zeta_i}, \quad
\divdiff(z)f \defd f(z),
\end{equation}
```
which, however, is prone to catastrophic cancellation for very small
``\abs{\zeta_i-\zeta_j}``. This can be partially alleviated by
employing `BigFloat`s, but that will only postpone the breakdown,
albeit with ~40 orders of magnitude, which might be enough for
practical purposes (but much slower).
[`MatrixPolynomials.ts_div_diff_table`](@ref) is based upon
the fact the divided differences in a third way can be computed as
[^McCurdy][^Opitz]
```math
\begin{equation}
\label{eqn:div-diff-mat-fun}
\tag{\ref{eqn:div-diff-def}†}
\divdiff(\zeta_{i:j})f \defd
\vec{e}_1^\top
f(\mat{Z}_{i:j}),
\end{equation}
```
i.e. the first row of the function ``f`` applied to the matrix
```math
\begin{equation}
\mat{Z}_{i:j}\defd
\bmat{
\zeta_i&1&\\
&\zeta_{i+1}&1\\
&&\ddots&\ddots\\
&&&\ddots&1\\
&&&&\zeta_j}.
\end{equation}
```
The right-eigenvectors are given by [^Opitz]
```math
\begin{equation}
\label{eqn:div-diff-mat-right-eigen}
\mat{Q}_\zeta = \{q_{ik}\}, \quad
q_{ik} =
\begin{cases}
\prod_{j=i}^{k-1} (\zeta_k - \zeta_j)^{-1}, & i < k,\\
1, & i = k,\\
0, & \textrm{else},
\end{cases}
\end{equation}
```
and similarly, the left-eigenvectors are given by
```math
\begin{equation}
\label{eqn:div-diff-mat-left-eigen}
\tag{\ref{eqn:div-diff-mat-right-eigen}*}
\mat{Q}_\zeta^{-1} = \{\conj{q}_{ik}\}, \quad
\conj{q}_{ik} =
\begin{cases}
\prod_{j=i+1}^k (\zeta_i - \zeta_j)^{-1}, & i < k,\\
1, & i = k,\\
0, & \textrm{else},
\end{cases}
\end{equation}
```
such that
```math
\begin{equation}
\divdiff(\zeta_{i:j})f=
\mat{Q}_\zeta\mat{F}_\zeta\mat{Q}_\zeta^{-1},\quad
\mat{F}_\zeta \defd \bmat{f(\zeta_i)\\&f(\zeta_{i+1})\\&&\ddots\\&&&f(\zeta_j)}.
\end{equation}
```
However, straight evaluation of
``(\ref{eqn:div-diff-mat-right-eigen},\ref{eqn:div-diff-mat-left-eigen})``
is prone to the same kind of catastrophic cancellation as is
``\eqref{eqn:div-diff-recursive}``, so to evaluate
``\eqref{eqn:div-diff-mat-fun}``, one instead turns to Taylor or
[Padé](https://en.wikipedia.org/wiki/Pad%C3%A9_approximant) expansions
of ``f(\mat{Z}_{i:j})`` [^McCurdy][^Caliari], or interpolation
polynomial basis changes [^Zivcovich].
As an illustration, we show the divided differences of `exp` over 100
points uniformly spread over ``[-2,2]``, calculated using
``\eqref{eqn:div-diff-recursive}``, in `Float64` and `BigFloat`
precision, along with a Taylor expansion of
``\eqref{eqn:div-diff-mat-fun}``:

It can clearly be seen that the Taylor expansion is not susceptible to
the catastrophic cancellation.
Thanks to the general implementation of divided differences using
Taylor expansions of the desired function, it is very easy to generate
[Newton polynomials](@ref) approximating the function on an interval:
```julia-repl
julia> import MatrixPolynomials: Leja, points, NewtonPolynomial, ⏃
julia> μ = 10.0 # Extent of interval
10.0
julia> m = 40 # Number of Leja points
40
julia> ζ = points(Leja(μ*range(-1,stop=1,length=1000),m))
40-element Array{Float64,1}:
10.0
-10.0
-0.01001001001001001
5.7757757757757755
-6.596596596596597
8.398398398398399
-8.6986986986987
-3.053053053053053
3.2132132132132134
9.43943943943944
-9.51951951951952
-4.794794794794795
7.137137137137137
1.5515515515515514
-7.757757757757758
9.7997997997998
-1.6116116116116117
-9.83983983983984
4.614614614614615
8.91891891891892
-5.7157157157157155
2.3723723723723724
-9.11911911911912
7.757757757757758
-3.873873873873874
6.416416416416417
-8.218218218218219
9.91991991991992
-0.8108108108108109
-9.93993993993994
3.973973973973974
-7.137137137137137
9.1991991991992
-2.3523523523523524
0.8108108108108109
-9.67967967967968
9.63963963963964
5.235235235235235
-5.275275275275275
8.078078078078079
julia> d = ⏃(sin, ζ, 1, 0, 1)
40-element Array{Float64,1}:
-0.5440211108893093
-0.05440211108893093
0.00010554419095304635
0.00042707706157334835
0.00017816519362596795
-0.00015774261733182256
-3.046393737965622e-6
-1.7726427136510242e-6
-1.2091185654301347e-7
8.298167162094031e-8
1.623156704750302e-9
-2.1182984780033414e-9
3.072198477098241e-11
2.690974958064657e-11
7.708729505182354e-13
-1.385345395017015e-13
2.081712029555509e-15
6.103669805230243e-16
4.2232933731665444e-18
-2.098152059762693e-18
7.153277579328475e-21
6.390881616124369e-21
7.322223484376659e-23
-1.3419887223602703e-23
-4.050939196813086e-26
2.4794777140850798e-26
1.268544482329477e-28
-3.581342740292682e-29
2.7876085130074983e-31
4.786776652095869e-32
8.943705105911237e-36
-5.432439158165548e-35
9.88206793819289e-38
5.559232062626121e-38
-1.2016071877913981e-41
-4.710497689585078e-41
7.660823607389171e-45
3.728816926131357e-44
-4.378275580359998e-48
-2.577149389756008e-47
julia> np = NewtonPolynomial(ζ, d)
Newton polynomial of degree 39 on -10.0..10.0
julia> x = range(-μ, stop=μ, length=1000)
-10.0:0.02002002002002002:10.0
julia> f_np = np.(x);
julia> f_exact = sin.(x);
```
Behind the scenes, [`MatrixPolynomials.taylor_series`](@ref) is used
to generate the Taylor expansion of ``\sin(x)``, and when an
approximation of ``\sin(\tau \mat{Z})`` has been computed, the full
divided difference table ``\sin(\mat{Z})`` is recovered using
[`MatrixPolynomials.propagate_div_diff`](@ref).

## Reference
```@docs
MatrixPolynomials.⏃
MatrixPolynomials.std_div_diff
MatrixPolynomials.ts_div_diff_table
MatrixPolynomials.φₖ_div_diff_basis_change
MatrixPolynomials.div_diff_table_basis_change
MatrixPolynomials.min_degree
```
### Taylor series
```@docs
MatrixPolynomials.TaylorSeries
MatrixPolynomials.taylor_series
MatrixPolynomials.closure
```
### Scaling
For the computation of ``\exp(A)``, a common approach when ``|A|`` is
large is to compute ``[\exp(A/s)]^s`` instead. This is known as
_scaling and squaring_, if ``s`` is selected to be a
power-of-two. Similar relationships can be found for other functions
and are implemented for some using
[`MatrixPolynomials.propagate_div_diff`](@ref).
```@docs
MatrixPolynomials.propagate_div_diff
MatrixPolynomials.propagate_div_diff_sin_cos
```
## Bibliography
[^Caliari]: Caliari, M. (2007). Accurate evaluation of divided
differences for polynomial interpolation of exponential
propagators. Computing, 80(2), 189–201. [DOI:
10.1007/s00607-007-0227-1](http://dx.doi.org/10.1007/s00607-007-0227-1)
[^McCurdy]: McCurdy, A. C., Ng, K. C., & Parlett,
B. N. (1984). Accurate computation of divided differences of the
exponential function. Mathematics of Computation, 43(168),
501–501. [DOI:
10.1090/s0025-5718-1984-0758198-0](http://dx.doi.org/10.1090/s0025-5718-1984-0758198-0)
[^Opitz]: Opitz, G. (1964). Steigungsmatrizen. ZAMM - Journal of
Applied Mathematics and Mechanics / Zeitschrift für Angewandte
Mathematik und Mechanik, 44(S1), [DOI:
10.1002/zamm.19640441321](http://dx.doi.org/10.1002/zamm.19640441321)
[^Zivcovich]: Zivcovich, F. (2019). Fast and accurate computation of
divided differences for analytic functions, with an application to
the exponential function. Dolomites Research Notes on
Approximation, 12(1), 28–42. [PDF:
Zivcovich_2019_FAC.pdf](https://drna.padovauniversitypress.it/system/files/papers/Zivcovich_2019_FAC.pdf)
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | docs | 1260 | # Functions of matrices
```@docs
MatrixPolynomials.FuncV
MatrixPolynomials.FuncV(f::Function, A, m::Integer, t=one(eltype(A)); distribution=:leja, leja_multiplier=100, tol=1e-15, kwargs...)
LinearAlgebra.mul!(w, funcv::MatrixPolynomials.FuncV, v)
```
## Spectral ranges and shapes
```@docs
MatrixPolynomials.spectral_range
MatrixPolynomials.hermitian_spectral_range
```
### Shapes
The shapes in the complex plane are mainly used to generate suitable
distributions of [Leja points](@ref), which are in turn used to
generate [Newton polynomials](@ref) that efficiently approximate
various functions on the field-of-values of a matrix ``\mat{A}`` which
is contained within the spectral shape.
```@docs
MatrixPolynomials.Shape
```
#### Lines
```@docs
MatrixPolynomials.Line
Base.:(*)(n::Number, l::MatrixPolynomials.Line)
Base.range(l::MatrixPolynomials.Line, n)
Statistics.mean(l::MatrixPolynomials.Line)
Base.union(a::MatrixPolynomials.Line, b::MatrixPolynomials.Line)
```
#### Rectangles
```@docs
MatrixPolynomials.Rectangle
Base.:(*)(n::Number, l::MatrixPolynomials.Rectangle)
Base.range(l::MatrixPolynomials.Rectangle, n)
Statistics.mean(l::MatrixPolynomials.Rectangle)
Base.union(a::MatrixPolynomials.Rectangle, b::MatrixPolynomials.Rectangle)
```
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | docs | 740 | # MatrixPolynomials.jl
The main purpose of this package is to provide polynomial
approximations to ``f(\mat{A})``, i.e. the function of a matrix
``\mat{A}`` for which the field-of-values ``W(\mat{A}) \subset
\Complex`` (or equivalently the distribution of eigenvalues) is known
_a priori_. If this is the case, a polynomial approximation ``p(z)
\approx f(z)`` for ``z \in W(\mat{A})`` can be constructed, and this
can subsequently be used, substituting ``\mat{A}`` for ``z``. This is
in contrast to Krylov-based methods, where the matrix polynomials are
generated on-the-fly, without any prior knowledge of ``W(\mat{A})``
(even though knowledge _can_ be used to speed up the convergence of
the Krylov iterations).
## Index
```@index
```
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | docs | 4288 | # Leja points
A common problem in polynomial interpolation of functions, is that
when the number of interpolation points is increased, the
interpolation polynomial becomes ill-conditioned
([overfitting](https://en.wikipedia.org/wiki/Overfitting)). It can be
shown that interpolation at the roots of the [Chebyshev polynomials
](https://en.wikipedia.org/wiki/Chebyshev_polynomials) yields the best
approximation, however, it is difficult to generate successively
better approximations, since the roots of the Chebyshev polynomial of
degree ``m`` are not related to those of the polynomial of degree
``m-1``.
The Leja points [^Leja] ``\{\zeta_i\}`` are generated from a set ``E
\subset \Complex`` such that the next point in the sequence is
maximally distant from all previously generated points:
```math
\begin{equation}
w(\zeta_j)
\prod_{k=0}^{j-1} \abs{\zeta_j-\zeta_k} =
\max_{\zeta\in E}
w(\zeta)
\prod_{k=0}^{j-1}
\abs{\zeta - \zeta_k},
\end{equation}
```
with ``w(\zeta)`` being an optional weight function (unity
hereinafter). Interpolating a function on the Leja points largely
avoids the overfitting problems and performs similarly to Chebyshev
interpolation [^Reichel], while still allowing
for iteratively improved approximation by the addition of more
interpolation points.
MatrixPolynomials.jl provides two methods for generating the Leja
points, [`MatrixPolynomials.Leja`](@ref) and
[`MatrixPolynomials.FastLeja`](@ref)[^Baglama]. The figure below
illustrates the distribution of Leja points using both methods, on the
line ``[-2,2]``, for the [`MatrixPolynomials.Leja`](@ref), an
underlying discretization of 1000 points was employed, and 10 Leja
points were generated. The lower part of the plot shows the estimation
of the [capacity](https://en.wikipedia.org/wiki/Capacity_of_a_set),
calculated as
```math
C(\{\zeta_{1:m}\}) \approx
\left|\left(\prod_{i=1}^{m-1} |\zeta_m-\zeta_i|\right)\right|^{1/m}.
```
For the set ``[-2,2]``, the capacity is unity, which is approached for
increasing values of ``m``.
```julia-repl
julia> import MatrixPolynomials: Leja, FastLeja
julia> m = 10
10
julia> a,b = -2,2
(-2, 2)
julia> l = Leja(range(a, stop=b, length=1000), m)
Leja{Float64}([-1.995995995995996, -1.991991991991992, -1.987987987987988, -1.983983983983984, -1.97997997997998, -1.975975975975976, -1.971971971971972, -1.967967967967968, -1.9639639639639639, -1.95995995995996 … 1.95995995995996, 1.9639639639639639, 1.967967967967968, 1.971971971971972, 1.975975975975976, 1.97997997997998, 1.983983983983984, 1.987987987987988, 1.991991991991992, 1.995995995995996], [2.0, -2.0, -0.002002002002002002, 1.155155155155155, -1.3193193193193193, 1.6796796796796796, -1.7397397397397398, -0.6106106106106106, 0.6426426426426426, 1.887887887887888], [0.0, 4.0, 3.9999959919879844, 3.084537289340691, 7.36488275292736, 3.118030920568761, 7.038861956228758, 7.143962613999413, 7.199339458696, 4.549146401863414])
julia> fl = FastLeja(a, b, m)
FastLeja{Float64}([2.0, -2.0, 0.0, -1.0, 1.0, -1.5, 1.5, 0.5, -1.75, 1.75], [-3.111827946268022, -1.5140533447265625, 7.91015625, -1.3255691528320312, -3.0929946899414062, 0.6718902150169015, 1.1896133422851562, 1.2691259616985917, -1.8015846004709601, 2.6076411906e-314], [-1.875, 0.25, -0.5, 1.25, -1.25, 1.625, 0.75, -1.625, 1.875, 1.5e-323], [2, 3, 4, 5, 6, 7, 8, 9, 10, 2], [9, 8, 3, 7, 4, 10, 5, 6, 1, 4570435120])
```

## Reference
```@docs
MatrixPolynomials.Leja
MatrixPolynomials.Leja(S::AbstractVector{T}, n::Integer) where T
MatrixPolynomials.leja!
MatrixPolynomials.FastLeja
MatrixPolynomials.fast_leja!
MatrixPolynomials.points
```
## Bibliography
[^Leja]: Leja, F. (1957). Sur certaines suites liées aux ensembles
plans et leur application à la représentation conforme. Annales
Polonici Mathematici, 4(1), 8–13. [DOI:
10.4064/ap-4-1-8-13](http://dx.doi.org/10.4064/ap-4-1-8-13)
[^Reichel]: Reichel, L. (1990). Newton Interpolation At Leja
Points. BIT, 30(2), 332–346. [DOI:
10.1007/bf02017352](http://dx.doi.org/10.1007/bf02017352)
[^Baglama]: Baglama, J., Calvetti, D., & Reichel, L. (1998). Fast Leja
points. Electron. Trans. Numer. Anal, 7(124-140), 119–120. [URL:
https://elibm.org/article/10006464](https://elibm.org/article/10006464)
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | docs | 692 | # Newton polynomials
[^Kandolf]
```@docs
MatrixPolynomials.NewtonPolynomial
MatrixPolynomials.NewtonPolynomial(f::Function, ζ::AbstractVector)
MatrixPolynomials.NewtonMatrixPolynomial
LinearAlgebra.mul!(w, nmp::MatrixPolynomials.NewtonMatrixPolynomial, A, v)
```
## Error estimators
```@docs
MatrixPolynomials.NewtonMatrixPolynomialDerivative
MatrixPolynomials.φₖResidualEstimator
```
## Bibliography
[^Kandolf]: Kandolf, P., Ostermann, A., & Rainer, S. (2014). A
residual based error estimate for Leja interpolation of matrix
functions. Linear Algebra and its Applications, 456(nil),
157–173. [DOI:
10.1016/j.laa.2014.04.023](http://dx.doi.org/10.1016/j.laa.2014.04.023)
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.1.3 | bab42666bb420d4481f99e6bae9615229ead71ec | docs | 4081 | # φₖ functions
## Definition
These are defined recursively through
```math
\begin{equation}
\label{eqn:phi-k-recursive}
\varphi_0(z) \defd \ce^z, \quad
\varphi_1(z) \defd \frac{\ce^z-1}{z}, \quad
\varphi_{k+1}(z) \defd \frac{\varphi_k(z)-\varphi_k(0)}{z}, \quad
\varphi_k(0)=\frac{1}{k!}.
\end{equation}
```
An alternate definition is
```math
\begin{equation}
h^k \varphi_k(hz) = \int_0^h \diff{s}
\ce^{(h-s)z} \frac{s^{k-1}}{(k-1)!}.
\end{equation}
```
## Accuracy
### Accuracy for ``k=1``
```math
\begin{equation}
\label{eqn:phi-1-naive}
\varphi_1(z) \equiv \frac{\ce^z-1}{z}
\end{equation}
```
This is a common example of catastrophic cancellation; for small
``\abs{z}``, ``\ce^z - 1\approx 0``, and we thus divide a small number
by a small number. By employing a trick shown by e.g.
Higham, N. (2002). Accuracy and stability of numerical
algorithms. Philadelphia: Society for Industrial and Applied
Mathematics.
we can substantially improve accuracy:
```math
\begin{equation}
\label{eqn:phi-1-accurate}
\varphi_1(z) = \begin{cases}
1, & \abs{z} < \varepsilon,\\
\frac{\ce^z-1}{\log\ce^z}, & \varepsilon < \abs{z} < 1, \\
\frac{\ce^z-1}{z}, & \textrm{else}.
\end{cases}
\end{equation}
```

The solid line corresponds to the naïve implementation
``\eqref{eqn:phi-1-naive}``, whereas the dashed line corresponds to
the accurate implementation ``\eqref{eqn:phi-1-accurate}``.
### Accuracy for ``k > 1 ``
For a Taylor expansion of a function ``f(x)``, we have
```math
\begin{equation}
f(x-a) = \underbrace{\sum_{i=0}^n \frac{f^{(i)}(a)}{i!} (x-a)^i}_{\defd T_n(x)} +
\underbrace{\frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1}}_{\defd R_n(x)}, \quad
\xi \in [a,x]
\end{equation}
```
We now Taylor expand ``\ce^{(h-s)z}`` about ``z=0``:
```math
\begin{equation}
\ce^{(h-s)z} =
\sum_{i=0}^n \frac{z^i(h-s)^i}{i!} +
\frac{\ce^{(h-s)\zeta}\zeta^{n+1}(h-s)^{n+1}}{(n+1)!},
\quad \abs{\zeta} \leq z.
\end{equation}
```
With this, we now calculate the definite integral appearing in the
definition of ``\varphi_k``:
```math
\begin{equation}
\begin{aligned}
\int_0^h\diff{s}
\ce^{(h-s)z} s^{k-1}
&=
\int_0^h\diff{s}
\sum_{i=0}^n \frac{z^i(h-s)^is^{k-1}}{i!} +
\int_0^h\diff{s}
\frac{\ce^{(h-s)\zeta}\zeta^{n+1}(h-s)^{n+1}s^{k-1}}{(n+1)!} \\
&=
\sum_{i=0}^n
\frac{z^i}{i!}
\int_0^h\diff{s}
(h-s)^is^{k-1} +
\int_0^h\diff{s}
\frac{\ce^{(h-s)\zeta}\zeta^{n+1}(h-s)^{n+1}s^{k-1}}{(n+1)!}.
\end{aligned}
\end{equation}
```
For the case we are interested in, ``h=1`` and the first integral is
equivalent to [Euler's beta function](https://en.wikipedia.org/wiki/Beta_function):
```math
\begin{equation}
\int_0^1\diff{s} s^{k-1}(1-s)^i \equiv \Beta(k,i+1) \equiv \frac{\Gamma(k)\Gamma(i+1)}{\Gamma(k+i+1)},
\end{equation}
```
which, for integer ``k,i`` has the following value
```math
\begin{equation}
\Beta(k,i+1) = \frac{(k-1)!i!}{(k+i)!}.
\end{equation}
```
Inserting this into the integral (having set ``h=1``), we find
```math
\begin{equation}
\label{eqn:phi-k-expansion}
\varphi_k(z) =
\sum_{i=0}^n
\frac{z^{i}}{(k+i)!}
+\int_0^1\diff{s} R_n(s,\zeta),
\end{equation}
```
where we have made explicit the dependence of the [Lagrange
remainder](https://en.wikipedia.org/wiki/Taylor%27s_theorem#Explicit_formulas_for_the_remainder)
``R_n(s,\zeta)`` on ``s``.
Some numerical testing seems to indicate it is enough to set ``n=k``
in the Taylor expansion ``\eqref{eqn:phi-k-expansion}`` to get
accurate evaluation of ``\phi_k(x)`` for small ``\abs{x}``,
``x\in\mathbb{R}``. For general ``z``, the amount of required terms
seems higher, so ``n`` is currently set to ``10k``.

The plot includes ``\phi_k(z)`` for ``k\in\{0..100\}``. To illustrate
the rounding errors that would occur if one were to use the recursive
definition ``\eqref{eqn:phi-k-recursive}`` directly , we plot
``\varphi_k(x)``, but for ``k\in\{0..4\}`` only:

## Reference
```@docs
MatrixPolynomials.φ₁
MatrixPolynomials.φ
```
| MatrixPolynomials | https://github.com/jagot/MatrixPolynomials.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 1074 | module Formatting
import Base.show
using Printf, Logging
export
FormatSpec, FormatExpr,
printfmt, printfmtln, fmt, format,
sprintf1, generate_formatter
if ccall(:jl_generating_output, Cint, ()) == 1
@warn """
DEPRECATION NOTICE
Formatting.jl has been unmaintained for a while, with some serious
correctness bugs compromising the original purpose of the package. As a result,
it has been deprecated - consider using an alternative, such as
`Format.jl` (https://github.com/JuliaString/Format.jl) or the `Printf` stdlib directly.
If you are not using Formatting.jl as a direct dependency, please consider
opening an issue on any packages you are using that do use it as a dependency.
From Julia 1.9 onwards, you can query `]why Formatting` to figure out which
package originally brings it in as a dependency.
"""
end
include("cformat.jl" )
include("fmtspec.jl")
include("fmtcore.jl")
include("formatexpr.jl")
end # module
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 10612 | formatters = Dict{ String, Function }()
sprintf1( fmt::String, x ) = eval(Expr(:call, generate_formatter( fmt ), x))
function checkfmt(fmt)
@static if VERSION > v"1.6.0-DEV.854"
test = Printf.Format(fmt)
length(test.formats) == 1 ||
error( "Only one AND undecorated format string is allowed")
else
test = @static VERSION >= v"1.4.0-DEV.180" ? Printf.parse(fmt) : Base.Printf.parse( fmt )
(length( test ) == 1 && typeof( test[1] ) <: Tuple) ||
error( "Only one AND undecorated format string is allowed")
end
end
function generate_formatter( fmt::String )
global formatters
haskey( formatters, fmt ) && return formatters[fmt]
if !occursin("'", fmt)
checkfmt(fmt)
formatter = @eval(x->@sprintf( $fmt, x ))
return (formatters[ fmt ] = x->Base.invokelatest(formatter, x))
end
conversion = fmt[end]
conversion in "sduifF" ||
error( string("thousand separator not defined for ", conversion, " conversion") )
fmtactual = replace( fmt, "'" => "", count=1 )
checkfmt( fmtactual )
if !occursin(conversion, "sfF")
formatter = @eval(x->checkcommas(@sprintf( $fmtactual, x )))
return (formatters[ fmt ] = x->Base.invokelatest(formatter, x))
end
formatter =
if endswith( fmtactual, 's')
@eval((x::Real)->((eltype(x) <: Rational)
? addcommasrat(@sprintf( $fmtactual, x ))
: addcommasreal(@sprintf( $fmtactual, x ))))
else
@eval((x::Real)->addcommasreal(@sprintf( $fmtactual, x )))
end
return (formatters[ fmt ] = x->Base.invokelatest(formatter, x))
end
function addcommasreal(s)
dpos = findfirst( isequal('.'), s )
dpos !== nothing && return string(addcommas( s[1:dpos-1] ), s[ dpos:end ])
# find the rightmost digit
for i in length( s ):-1:1
isdigit( s[i] ) && return string(addcommas( s[1:i] ), s[i+1:end])
end
s
end
function addcommasrat(s)
# commas are added to only the numerator
spos = findfirst( isequal('/'), s )
string(addcommas( s[1:spos-1] ), s[spos:end])
end
function checkcommas(s)
for i in length( s ):-1:1
if isdigit( s[i] )
s = string(addcommas( s[1:i] ), s[i+1:end])
break
end
end
s
end
function addcommas( s::String )
len = length(s)
t = ""
for i in 1:3:len
subs = s[max(1,len-i-1):len-i+1]
if i == 1
t = subs
else
if match( r"[0-9]", subs ) != nothing
t = subs * "," * t
else
t = subs * t
end
end
end
return t
end
function generate_format_string(;
width::Int=-1,
precision::Int= -1,
leftjustified::Bool=false,
zeropadding::Bool=false,
commas::Bool=false,
signed::Bool=false,
positivespace::Bool=false,
alternative::Bool=false,
conversion::String="f" #aAdecEfFiosxX
)
s = "%"
if commas
s *= "'"
end
if alternative && in( conversion[1], "aAeEfFoxX" )
s *= "#"
end
if zeropadding && !leftjustified && width != -1
s *= "0"
end
if signed
s *= "+"
elseif positivespace
s *= " "
end
if width != -1
if leftjustified
s *= "-" * string( width )
else
s *= string( width )
end
end
if precision != -1
s *= "." * string( precision )
end
s * conversion
end
function format( x::T;
width::Int=-1,
precision::Int= -1,
leftjustified::Bool=false,
zeropadding::Bool=false, # when right-justified, use 0 instead of space to fill
commas::Bool=false,
signed::Bool=false, # +/- prefix
positivespace::Bool=false,
stripzeros::Bool=(precision== -1),
parens::Bool=false, # use (1.00) instead of -1.00. Used in finance
alternative::Bool=false, # usually for hex
mixedfraction::Bool=false,
mixedfractionsep::AbstractString="_",
fractionsep::AbstractString="/", # num / den
fractionwidth::Int = 0,
tryden::Int = 0, # if 2 or higher, try to use this denominator, without losing precision
suffix::AbstractString="", # useful for units/%
autoscale::Symbol=:none, # :metric, :binary or :finance
conversion::String=""
) where {T<:Real}
checkwidth = commas
if conversion == ""
if T <: AbstractFloat || T <: Rational && precision != -1
actualconv = "f"
elseif T <: Unsigned
actualconv = "x"
elseif T <: Integer
actualconv = "d"
else
conversion = "s"
actualconv = "s"
end
else
actualconv = conversion
end
if signed && commas
error( "You cannot use signed (+/-) AND commas at the same time")
end
if T <: Rational && conversion == "s"
stripzeros = false
end
if ( T <: AbstractFloat && actualconv == "f" || T <: Integer ) && autoscale != :none
actualconv = "f"
if autoscale == :metric
scales = [
(1e24, "Y" ),
(1e21, "Z" ),
(1e18, "E" ),
(1e15, "P" ),
(1e12, "T" ),
(1e9, "G"),
(1e6, "M"),
(1e3, "k") ]
if abs(x) > 1
for (mag, sym) in scales
if abs(x) >= mag
x /= mag
suffix = sym * suffix
break
end
end
elseif T <: AbstractFloat
smallscales = [
( 1e-12, "p" ),
( 1e-9, "n" ),
( 1e-6, "μ" ),
( 1e-3, "m" ) ]
for (mag,sym) in smallscales
if abs(x) < mag*10
x /= mag
suffix = sym * suffix
break
end
end
end
else
if autoscale == :binary
scales = [
(1024.0 ^8, "Yi" ),
(1024.0 ^7, "Zi" ),
(1024.0 ^6, "Ei" ),
(1024.0 ^5, "Pi" ),
(1024.0 ^4, "Ti" ),
(1024.0 ^3, "Gi"),
(1024.0 ^2, "Mi"),
(1024.0, "Ki")
]
else # :finance
scales = [
(1e12, "t" ),
(1e9, "b"),
(1e6, "m"),
(1e3, "k") ]
end
for (mag, sym) in scales
if abs(x) >= mag
x /= mag
suffix = sym * suffix
break
end
end
end
end
nonneg = x >= 0
fractional = 0
if T <: Rational && mixedfraction
actualconv = "d"
actualx = trunc( Int, x )
fractional = abs(x) - abs(actualx)
else
if parens && !in( actualconv[1], "xX" )
actualx = abs(x)
else
actualx = x
end
end
s = sprintf1( generate_format_string( width=width,
precision=precision,
leftjustified=leftjustified,
zeropadding=zeropadding,
commas=commas,
signed=signed,
positivespace=positivespace,
alternative=alternative,
conversion=actualconv
),actualx)
if T <:Rational && conversion == "s"
if mixedfraction && fractional != 0
num = fractional.num
den = fractional.den
if tryden >= 2 && mod( tryden, den ) == 0
num *= div(tryden,den)
den = tryden
end
fs = string( num ) * fractionsep * string(den)
if length(fs) < fractionwidth
fs = repeat( "0", fractionwidth - length(fs) ) * fs
end
s = rstrip(s)
if actualx != 0
s = rstrip(s) * mixedfractionsep * fs
else
if !nonneg
s = "-" * fs
else
s = fs
end
end
checkwidth = true
elseif !mixedfraction
s = replace( s, "//" => fractionsep )
checkwidth = true
end
elseif stripzeros && in( actualconv[1], "fFeEs" )
dpos = findfirst( isequal('.'), s )
if in( actualconv[1], "eEs" )
if in( actualconv[1], "es" )
epos = findfirst( isequal('e'), s )
else
epos = findfirst( isequal('E'), s )
end
if epos === nothing
rpos = length( s )
else
rpos = epos-1
end
else
rpos = length(s)
end
# rpos at this point is the rightmost possible char to start
# stripping
stripfrom = rpos+1
for i = rpos:-1:dpos+1
if s[i] == '0'
stripfrom = i
elseif s[i] ==' '
continue
else
break
end
end
if stripfrom <= rpos
if stripfrom == dpos+1 # everything after decimal is 0, so strip the decimal too
stripfrom = dpos
end
s = s[1:stripfrom-1] * s[rpos+1:end]
checkwidth = true
end
end
s *= suffix
if parens && !in( actualconv[1], "xX" )
# if zero or positive, we still need 1 white space on the right
if nonneg
s = " " * strip(s) * " "
else
s = "(" * strip(s) * ")"
end
checkwidth = true
end
if checkwidth && width != -1
if length(s) > width
s = replace( s, " " => "", count=length(s)-width )
if length(s) > width && endswith( s, " " )
s = reverse( replace( reverse(s), " " => "", count=length(s)-width ) )
end
if length(s) > width
s = replace( s, "," => "", count=length(s)-width )
end
elseif length(s) < width
if leftjustified
s = s * repeat( " ", width - length(s) )
else
s = repeat( " ", width - length(s) ) * s
end
end
end
s
end
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 6827 | # core formatting functions
### auxiliary functions
### print char n times
function _repprint(out::IO, c::Char, n::Int)
while n > 0
print(out, c)
n -= 1
end
end
### print string or char
function _pfmt_s(out::IO, fs::FormatSpec, s::Union{AbstractString,Char})
wid = fs.width
slen = length(s)
if wid <= slen
print(out, s)
else
a = fs.align
if a == '<'
print(out, s)
_repprint(out, fs.fill, wid-slen)
else
_repprint(out, fs.fill, wid-slen)
print(out, s)
end
end
end
### print integers
_mul(x::Integer, ::_Dec) = x * 10
_mul(x::Integer, ::_Bin) = x << 1
_mul(x::Integer, ::_Oct) = x << 3
_mul(x::Integer, ::Union{_Hex, _HEX}) = x << 4
_div(x::Integer, ::_Dec) = div(x, 10)
_div(x::Integer, ::_Bin) = x >> 1
_div(x::Integer, ::_Oct) = x >> 3
_div(x::Integer, ::Union{_Hex, _HEX}) = x >> 4
function _ndigits(x::Integer, op) # suppose x is non-negative
m = 1
q = _div(x, op)
while q > 0
m += 1
q = _div(q, op)
end
return m
end
_ipre(op) = ""
_ipre(::Union{_Hex, _HEX}) = "0x"
_ipre(::_Oct) = "0o"
_ipre(::_Bin) = "0b"
_digitchar(x::Integer, ::_Bin) = Char(x == 0 ? '0' : '1')
_digitchar(x::Integer, ::_Dec) = Char('0' + x)
_digitchar(x::Integer, ::_Oct) = Char('0' + x)
_digitchar(x::Integer, ::_Hex) = Char(x < 10 ? '0' + x : 'a' + (x - 10))
_digitchar(x::Integer, ::_HEX) = Char(x < 10 ? '0' + x : 'A' + (x - 10))
_signchar(x::Real, s::Char) = signbit(x) ? '-' :
s == '+' ? '+' :
s == ' ' ? ' ' : '\0'
function _pfmt_int(out::IO, sch::Char, ip::String, zs::Integer, ax::Integer, op::Op) where {Op}
# print sign
if sch != '\0'
print(out, sch)
end
# print prefix
if !isempty(ip)
print(out, ip)
end
# print padding zeros
if zs > 0
_repprint(out, '0', zs)
end
# print actual digits
if ax == 0
print(out, '0')
else
_pfmt_intdigits(out, ax, op)
end
end
function _pfmt_intdigits(out::IO, ax::T, op::Op) where {Op, T<:Integer}
b_lb = _div(ax, op)
b = one(T)
while b <= b_lb
b = _mul(b, op)
end
r = ax
while b > 0
(q, r) = divrem(r, b)
print(out, _digitchar(q, op))
b = _div(b, op)
end
end
function _pfmt_i(out::IO, fs::FormatSpec, x::Integer, op::Op) where {Op}
# calculate actual length
ax = abs(x)
xlen = _ndigits(abs(x), op)
# sign char
sch = _signchar(x, fs.sign)
if sch != '\0'
xlen += 1
end
# prefix (e.g. 0x, 0b, 0o)
ip = ""
if fs.ipre
ip = _ipre(op)
xlen += length(ip)
end
# printing
wid = fs.width
if wid <= xlen
_pfmt_int(out, sch, ip, 0, ax, op)
elseif fs.zpad
_pfmt_int(out, sch, ip, wid-xlen, ax, op)
else
a = fs.align
if a == '<'
_pfmt_int(out, sch, ip, 0, ax, op)
_repprint(out, fs.fill, wid-xlen)
else
_repprint(out, fs.fill, wid-xlen)
_pfmt_int(out, sch, ip, 0, ax, op)
end
end
end
### print floating point numbers
function _pfmt_float(out::IO, sch::Char, zs::Integer, intv::Real, decv::Real, prec::Int)
# print sign
if sch != '\0'
print(out, sch)
end
# print padding zeros
if zs > 0
_repprint(out, '0', zs)
end
idecv = round(Integer, decv * exp10(prec))
# print integer part
if intv == 0
print(out, '0')
else
_pfmt_intdigits(out, intv, _Dec())
end
# print decimal point
print(out, '.')
# print decimal part
if prec > 0
nd = _ndigits(idecv, _Dec())
if nd < prec
_repprint(out, '0', prec - nd)
end
_pfmt_intdigits(out, idecv, _Dec())
end
end
function _pfmt_f(out::IO, fs::FormatSpec, x::AbstractFloat)
# separate sign, integer, and decimal part
rax = round(abs(x), digits = fs.prec)
sch = _signchar(x, fs.sign)
intv = trunc(Integer, rax)
decv = rax - intv
# calculate length
xlen = _ndigits(intv, _Dec()) + 1 + fs.prec
if sch != '\0'
xlen += 1
end
# print
wid = fs.width
if wid <= xlen
_pfmt_float(out, sch, 0, intv, decv, fs.prec)
elseif fs.zpad
_pfmt_float(out, sch, wid-xlen, intv, decv, fs.prec)
else
a = fs.align
if a == '<'
_pfmt_float(out, sch, 0, intv, decv, fs.prec)
_repprint(out, fs.fill, wid-xlen)
else
_repprint(out, fs.fill, wid-xlen)
_pfmt_float(out, sch, 0, intv, decv, fs.prec)
end
end
end
function _pfmt_floate(out::IO, sch::Char, zs::Integer, u::Real, prec::Int, e::Integer, ec::Char)
intv = trunc(Integer,u)
decv = u - intv
if intv == 0 && decv != 0
intv = 1
decv -= 1
end
_pfmt_float(out, sch, zs, intv, decv, prec)
print(out, ec)
if e >= 0
print(out, '+')
else
print(out, '-')
e = -e
end
if e < 10
print(out, '0')
end
_pfmt_intdigits(out, e, _Dec())
end
function _pfmt_e(out::IO, fs::FormatSpec, x::AbstractFloat)
# extract sign, significand, and exponent
ax = abs(x)
sch = _signchar(x, fs.sign)
if ax == 0.0
e = 0
u = zero(x)
else
rax = round(ax, sigdigits = fs.prec + 1)
e = floor(Integer, log10(rax)) # exponent
u = rax * exp10(-e) # significand
i = 1
while u == Inf
u = 10^i * rax * exp10(-e - i)
i += 1
end
end
# calculate length
xlen = 6 + fs.prec
if abs(e) > 99
xlen += _ndigits(abs(e), _Dec()) - 2
end
if sch != '\0'
xlen += 1
end
# print
ec = isuppercase(fs.typ) ? 'E' : 'e'
wid = fs.width
if wid <= xlen
_pfmt_floate(out, sch, 0, u, fs.prec, e, ec)
elseif fs.zpad
_pfmt_floate(out, sch, wid-xlen, u, fs.prec, e, ec)
else
a = fs.align
if a == '<'
_pfmt_floate(out, sch, 0, u, fs.prec, e, ec)
_repprint(out, fs.fill, wid-xlen)
else
_repprint(out, fs.fill, wid-xlen)
_pfmt_floate(out, sch, 0, u, fs.prec, e, ec)
end
end
end
function _pfmt_g(out::IO, fs::FormatSpec, x::AbstractFloat)
# number decomposition
ax = abs(x)
if 1.0e-4 <= ax < 1.0e6
_pfmt_f(out, fs, x)
else
_pfmt_e(out, fs, x)
end
end
function _pfmt_specialf(out::IO, fs::FormatSpec, x::AbstractFloat)
if isinf(x)
if x > 0
_pfmt_s(out, fs, "Inf")
else
_pfmt_s(out, fs, "-Inf")
end
else
@assert isnan(x)
_pfmt_s(out, fs, "NaN")
end
end
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 5340 | # formatting specification
# formatting specification language
#
# spec ::= [[fill]align][sign][#][0][width][,][.prec][type]
# fill ::= <any character>
# align ::= '<' | '>'
# sign ::= '+' | '-' | ' '
# width ::= <integer>
# prec ::= <integer>
# type ::= 'b' | 'c' | 'd' | 'e' | 'E' | 'f' | 'F' | 'g' | 'G' |
# 'n' | 'o' | 'x' | 'X' | 's'
#
# Please refer to http://docs.python.org/2/library/string.html#formatspec
# for more details
#
## FormatSpec type
const _numtypchars = Set(['b', 'd', 'e', 'E', 'f', 'F', 'g', 'G', 'n', 'o', 'x', 'X'])
_tycls(c::Char) =
(c == 'd' || c == 'n' || c == 'b' || c == 'o' || c == 'x') ? 'i' :
(c == 'e' || c == 'f' || c == 'g') ? 'f' :
(c == 'c') ? 'c' :
(c == 's') ? 's' :
error("Invalid type char $(c)")
struct FormatSpec
cls::Char # category: 'i' | 'f' | 'c' | 's'
typ::Char
fill::Char
align::Char
sign::Char
width::Int
prec::Int
ipre::Bool # whether to prefix 0b, 0o, or 0x
zpad::Bool # whether to do zero-padding
tsep::Bool # whether to use thousand-separator
function FormatSpec(typ::Char;
fill::Char=' ',
align::Char='\0',
sign::Char='-',
width::Int=-1,
prec::Int=-1,
ipre::Bool=false,
zpad::Bool=false,
tsep::Bool=false)
if align=='\0'
align = (typ in _numtypchars) ? '>' : '<'
end
cls = _tycls(lowercase(typ))
if cls == 'f' && prec < 0
prec = 6
end
new(cls, typ, fill, align, sign, width, prec, ipre, zpad, tsep)
end
end
function show(io::IO, fs::FormatSpec)
println(io, "$(typeof(fs))")
println(io, " cls = $(fs.cls)")
println(io, " typ = $(fs.typ)")
println(io, " fill = $(fs.fill)")
println(io, " align = $(fs.align)")
println(io, " sign = $(fs.sign)")
println(io, " width = $(fs.width)")
println(io, " prec = $(fs.prec)")
println(io, " ipre = $(fs.ipre)")
println(io, " zpad = $(fs.zpad)")
println(io, " tsep = $(fs.tsep)")
end
## parse FormatSpec from a string
const _spec_regex = r"^(.?[<>])?([ +-])?(#)?(\d+)?(,)?(.\d+)?([bcdeEfFgGnosxX])?$"
function FormatSpec(s::AbstractString)
# default spec
_fill = ' '
_align = '\0'
_sign = '-'
_width = -1
_prec = -1
_ipre = false
_zpad = false
_tsep = false
_typ = 's'
if !isempty(s)
m = match(_spec_regex, s)
if m == nothing
error("Invalid formatting spec: $(s)")
end
(a1, a2, a3, a4, a5, a6, a7) = m.captures
# a1: [[fill]align]
if a1 != nothing
if length(a1) == 1
_align = a1[1]
else
_fill = a1[1]
_align = a1[nextind(a1, 1)]
end
end
# a2: [sign]
if a2 != nothing
_sign = a2[1]
end
# a3: [#]
if a3 != nothing
_ipre = true
end
# a4: [0][width]
if a4 != nothing
if a4[1] == '0'
_zpad = true
if length(a4) > 1
_width = parse(Int,a4[2:end])
end
else
_width = parse(Int,a4)
end
end
# a5: [,]
if a5 != nothing
_tsep = true
end
# a6 [.prec]
if a6 != nothing
_prec = parse(Int,a6[2:end])
end
# a7: [type]
if a7 != nothing
_typ = a7[1]
end
end
return FormatSpec(_typ;
fill=_fill,
align=_align,
sign=_sign,
width=_width,
prec=_prec,
ipre=_ipre,
zpad=_zpad,
tsep=_tsep)
end
## formatted printing using a format spec
mutable struct _Dec end
mutable struct _Oct end
mutable struct _Hex end
mutable struct _HEX end
mutable struct _Bin end
_srepr(x) = repr(x)
_srepr(x::AbstractString) = x
_srepr(x::Char) = string(x)
_srepr(x::Enum) = string(x)
_toint(x) = Integer(x)
_toint(x::AbstractString) = parse(Int, x)
_tofloat(x) = float(x)
_tofloat(x::AbstractString) = parse(Float64, x)
function printfmt(io::IO, fs::FormatSpec, x)
cls = fs.cls
ty = fs.typ
if cls == 'i'
ix = _toint(x)
ty == 'd' || ty == 'n' ? _pfmt_i(io, fs, ix, _Dec()) :
ty == 'x' ? _pfmt_i(io, fs, ix, _Hex()) :
ty == 'X' ? _pfmt_i(io, fs, ix, _HEX()) :
ty == 'o' ? _pfmt_i(io, fs, ix, _Oct()) :
_pfmt_i(io, fs, ix, _Bin())
elseif cls == 'f'
fx = _tofloat(x)
if isfinite(fx)
ty == 'f' || ty == 'F' ? _pfmt_f(io, fs, fx) :
ty == 'e' || ty == 'E' ? _pfmt_e(io, fs, fx) :
error("format for type g or G is not supported yet (use f or e instead).")
else
_pfmt_specialf(io, fs, fx)
end
elseif cls == 's'
_pfmt_s(io, fs, _srepr(x))
else # cls == 'c'
_pfmt_s(io, fs, Char(x))
end
end
printfmt(fs::FormatSpec, x) = printfmt(stdout, fs, x)
fmt(fs::FormatSpec, x) = sprint(printfmt, fs, x)
fmt(spec::AbstractString, x) = fmt(FormatSpec(spec), x)
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 4721 | # formatting expression
### Argument specification
struct ArgSpec
argidx::Int
hasfilter::Bool
filter::Function
function ArgSpec(idx::Int, hasfil::Bool, filter::Function)
idx != 0 || error("Argument index cannot be zero.")
new(idx, hasfil, filter)
end
end
getarg(args, sp::ArgSpec) =
(a = args[sp.argidx]; sp.hasfilter ? sp.filter(a) : a)
# pos > 0: must not have iarg in expression (use pos+1), return (entry, pos + 1)
# pos < 0: must have iarg in expression, return (entry, -1)
# pos = 0: no positional argument before, can be either, return (entry, 1) or (entry, -1)
function make_argspec(s::AbstractString, pos::Int)
# for argument position
iarg::Int = -1
hasfil::Bool = false
ff::Function = Base.identity
if !isempty(s)
filrange = findfirst("|>", s)
if filrange === nothing
iarg = parse(Int,s)
else
ifil = first(filrange)
iarg = ifil > 1 ? parse(Int,s[1:prevind(s, ifil)]) : -1
hasfil = true
ff = eval(Symbol(s[ifil+2:end]))
end
end
if pos > 0
iarg < 0 || error("entry with and without argument index must not coexist.")
iarg = (pos += 1)
elseif pos < 0
iarg > 0 || error("entry with and without argument index must not coexist.")
else # pos == 0
if iarg < 0
iarg = pos = 1
else
pos = -1
end
end
return (ArgSpec(iarg, hasfil, ff), pos)
end
### Format entry
struct FormatEntry
argspec::ArgSpec
spec::FormatSpec
end
function make_formatentry(s::AbstractString, pos::Int)
@assert s[1] == '{' && s[end] == '}'
sc = s[2:prevind(s, lastindex(s))]
icolon = findfirst(isequal(':'), sc)
if icolon === nothing # no colon
(argspec, pos) = make_argspec(sc, pos)
spec = FormatSpec('s')
else
(argspec, pos) = make_argspec(sc[1:prevind(sc, icolon)], pos)
spec = FormatSpec(sc[nextind(sc, icolon):end])
end
return (FormatEntry(argspec, spec), pos)
end
### Format expression
mutable struct FormatExpr
prefix::String
suffix::String
entries::Vector{FormatEntry}
inter::Vector{String}
end
_raise_unmatched_lbrace() = error("Unmatched { in format expression.")
function find_next_entry_open(s::AbstractString, si::Int)
slen = lastindex(s)
p = findnext(isequal('{'), s, si)
(p === nothing || p < slen) || _raise_unmatched_lbrace()
while p !== nothing && s[p+1] == '{' # escape `{{`
p = findnext(isequal('{'), s, p+2)
(p === nothing || p < slen) || _raise_unmatched_lbrace()
end
# println("open at $p")
pre = p !== nothing ? s[si:prevind(s, p)] : s[si:end]
if !isempty(pre)
pre = replace(pre, "{{" => '{')
pre = replace(pre, "}}" => '}')
end
return (p, convert(String, pre))
end
function find_next_entry_close(s::AbstractString, si::Int)
p = findnext(isequal('}'), s, si)
p !== nothing || _raise_unmatched_lbrace()
# println("close at $p")
return p
end
function FormatExpr(s::AbstractString)
# init
prefix = ""
suffix = ""
entries = FormatEntry[]
inter = String[]
# scan
(p, prefix) = find_next_entry_open(s, 1)
if p !== nothing
q = find_next_entry_close(s, p+1)
(e, pos) = make_formatentry(s[p:q], 0)
push!(entries, e)
(p, pre) = find_next_entry_open(s, q+1)
while p !== nothing
push!(inter, pre)
q = find_next_entry_close(s, p+1)
(e, pos) = make_formatentry(s[p:q], pos)
push!(entries, e)
(p, pre) = find_next_entry_open(s, q+1)
end
suffix = pre
end
FormatExpr(prefix, suffix, entries, inter)
end
function printfmt(io::IO, fe::FormatExpr, args...)
if !isempty(fe.prefix)
print(io, fe.prefix)
end
ents = fe.entries
ne = length(ents)
if ne > 0
e = ents[1]
printfmt(io, e.spec, getarg(args, e.argspec))
for i = 2:ne
print(io, fe.inter[i-1])
e = ents[i]
printfmt(io, e.spec, getarg(args, e.argspec))
end
end
if !isempty(fe.suffix)
print(io, fe.suffix)
end
end
printfmt(io::IO, fe::AbstractString, args...) = printfmt(io, FormatExpr(fe), args...)
printfmt(fe::Union{AbstractString,FormatExpr}, args...) = printfmt(stdout, fe, args...)
printfmtln(io::IO, fe::Union{AbstractString,FormatExpr}, args...) = (printfmt(io, fe, args...); println(io))
printfmtln(fe::Union{AbstractString,FormatExpr}, args...) = printfmtln(stdout, fe, args...)
format(fe::Union{AbstractString,FormatExpr}, args...) =
sprint(printfmt, fe, args...)
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 7253 | using Formatting
using Test
using Printf
using Random
_erfinv(z) = sqrt(π) * Base.Math.@horner(z, 0, 1, 0, π/12, 0, 7π^2/480, 0, 127π^3/40320, 0,
4369π^4/5806080, 0, 34807π^5/182476800) / 2
function test_equality()
println( "test cformat equality...")
Random.seed!( 10 )
fmts = [ (x->@sprintf("%10.4f",x), "%10.4f"),
(x->@sprintf("%f", x), "%f"),
(x->@sprintf("%e", x), "%e"),
(x->@sprintf("%10f", x), "%10f"),
(x->@sprintf("%.3f", x), "%.3f"),
(x->@sprintf("%.3e", x), "%.3e")]
for (mfmtr,fmt) in fmts
for i in 1:10000
n = _erfinv( rand() * 1.99 - 1.99/2.0 )
expect = mfmtr( n )
actual = sprintf1( fmt, n )
@test expect == actual
end
end
fmts = [ (x->@sprintf("%d",x), "%d"),
(x->@sprintf("%10d",x), "%10d"),
(x->@sprintf("%010d",x), "%010d"),
(x->@sprintf("%-10d",x), "%-10d")]
for (mfmtr,fmt) in fmts
for i in 1:10000
j = round(Int, _erfinv( rand() * 1.99 - 1.99/2.0 ) * 100000 )
expect = mfmtr( j )
actual = sprintf1( fmt, j )
@test expect == actual
end
end
println( "...Done" )
end
@time test_equality()
println( "\nTest speed" )
function native_int()
for i in 1:200000
@sprintf( "%10d", i )
end
end
function runtime_int()
for i in 1:200000
sprintf1( "%10d", i )
end
end
function runtime_int_bypass()
f = generate_formatter( "%10d" )
for i in 1:200000
f( i )
end
end
println( "integer @sprintf speed")
@time native_int()
println( "integer sprintf speed")
@time runtime_int()
println( "integer sprintf speed, bypass repeated lookup")
@time runtime_int_bypass()
function native_float()
Random.seed!( 10 )
for i in 1:200000
@sprintf( "%10.4f", _erfinv( rand() ) )
end
end
function runtime_float()
Random.seed!( 10 )
for i in 1:200000
sprintf1( "%10.4f", _erfinv( rand() ) )
end
end
function runtime_float_bypass()
f = generate_formatter( "%10.4f" )
Random.seed!( 10 )
for i in 1:200000
f( _erfinv( rand() ) )
end
end
println()
println( "float64 @sprintf speed")
@time native_float()
println( "float64 sprintf speed")
@time runtime_float()
println( "float64 sprintf speed, bypass repeated lookup")
@time runtime_float_bypass()
function test_commas()
println( "\ntest commas..." )
@test sprintf1( "%'d", 1000 ) == "1,000"
@test sprintf1( "%'d", -1000 ) == "-1,000"
@test sprintf1( "%'d", 100 ) == "100"
@test sprintf1( "%'d", -100 ) == "-100"
@test sprintf1( "%'f", Inf ) == "Inf"
@test sprintf1( "%'f", -Inf ) == "-Inf"
@test sprintf1( "%'s", 1000.0 ) == "1,000.0"
@test sprintf1( "%'s", 1234567.0 ) == "1.234567e6"
end
function test_format()
println( "test format...")
@test format( 10 ) == "10"
@test format( 10.0 ) == "10"
@test format( 10.0, precision=2 ) == "10.00"
@test format( 111//100, precision=2 ) == "1.11"
@test format( 111//100 ) == "111/100"
@test format( 1234, commas=true ) == "1,234"
@test format( 1234, conversion="f", precision=2 ) == "1234.00"
@test format( 1.23, precision=3 ) == "1.230"
@test format( 1.23, precision=3, stripzeros=true ) == "1.23"
@test format( 1.00, precision=3, stripzeros=true ) == "1"
@test format( 1.0, conversion="e", stripzeros=true ) == "1e+00"
@test format( 1.0, conversion="e", precision=4 ) == "1.0000e+00"
# hex output
@test format( 1118, conversion="x" ) == "45e"
@test format( 1118, width=4, conversion="x" ) == " 45e"
@test format( 1118, width=4, zeropadding=true, conversion="x" ) == "045e"
@test format( 1118, alternative=true, conversion="x" ) == "0x45e"
@test format( 1118, width=4, alternative=true, conversion="x" ) == "0x45e"
@test format( 1118, width=6, alternative=true, conversion="x", zeropadding=true ) == "0x045e"
# mixed fractions
@test format( 3//2, mixedfraction=true ) == "1_1/2"
@test format( -3//2, mixedfraction=true ) == "-1_1/2"
@test format( 3//100, mixedfraction=true ) == "3/100"
@test format( -3//100, mixedfraction=true ) == "-3/100"
@test format( 307//100, mixedfraction=true ) == "3_7/100"
@test format( -307//100, mixedfraction=true ) == "-3_7/100"
@test format( 307//100, mixedfraction=true, fractionwidth=6 ) == "3_07/100"
@test format( -307//100, mixedfraction=true, fractionwidth=6 ) == "-3_07/100"
@test format( -302//100, mixedfraction=true ) == "-3_1/50"
# try to make the denominator 100
@test format( -302//100, mixedfraction=true,tryden = 100 ) == "-3_2/100"
@test format( -302//30, mixedfraction=true,tryden = 100 ) == "-10_1/15" # lose precision otherwise
@test format( -302//100, mixedfraction=true,tryden = 100,fractionwidth=6 ) == "-3_02/100" # lose precision otherwise
#commas
@test format( 12345678, width=10, commas=true ) == "12,345,678"
# it would try to squeeze out the commas
@test format( 12345678, width=9, commas=true ) == "12345,678"
# until it can't anymore
@test format( 12345678, width=8, commas=true ) == "12345678"
@test format( 12345678, width=7, commas=true ) == "12345678"
# only the numerator would have commas
@test format( 1111//1000, commas=true ) == "1,111/1000"
# this shows how, with enough space, parens line up with empty spaces
@test format( 12345678, width=12, commas=true, parens=true )== " 12,345,678 "
@test format( -12345678, width=12, commas=true, parens=true )== "(12,345,678)"
# same with unspecified width
@test format( 12345678, commas=true, parens=true )== " 12,345,678 "
@test format( -12345678, commas=true, parens=true )== "(12,345,678)"
@test format( 1.2e9, autoscale = :metric ) == "1.2G"
@test format( 1.2e6, autoscale = :metric ) == "1.2M"
@test format( 1.2e3, autoscale = :metric ) == "1.2k"
@test format( 1.2e-6, autoscale = :metric ) == "1.2μ"
@test format( 1.2e-9, autoscale = :metric ) == "1.2n"
@test format( 1.2e-12, autoscale = :metric ) == "1.2p"
@test format( 1.2e9, autoscale = :finance ) == "1.2b"
@test format( 1.2e6, autoscale = :finance ) == "1.2m"
@test format( 1.2e3, autoscale = :finance ) == "1.2k"
@test format( 0x40000000, autoscale = :binary ) == "1Gi"
@test format( 0x100000, autoscale = :binary ) == "1Mi"
@test format( 0x800, autoscale = :binary ) == "2Ki"
@test format( 0x400, autoscale = :binary ) == "1Ki"
@test format( 100.00, precision=2, suffix="%" ) == "100.00%"
@test format( 100, precision=2, suffix="%" ) == "100%"
@test format( 100, precision=2, suffix="%", conversion="f" ) == "100.00%"
end
test_commas()
test_format()
function test_generate_formatter()
fmt = generate_formatter( "%7.2f" )
@test fmt( 1.234 ) == " 1.23"
@test fmt( π ) == " 3.14"
fmt = generate_formatter( "%'10.2f" )
@test fmt( 1234.5678 ) == " 1,234.57" # BUG 1 extra space
fmt = generate_formatter( "%'10d" )
@test fmt( 1234567 ) == " 1,234,567" # BUG 2 extra spaces
end
test_generate_formatter()
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 6859 | # test format spec parsing
using Formatting
using Test
# default spec
fs = FormatSpec("")
@test fs.typ == 's'
@test fs.fill == ' '
@test fs.align == '<'
@test fs.sign == '-'
@test fs.width == -1
@test fs.prec == -1
@test fs.ipre == false
@test fs.zpad == false
@test fs.tsep == false
# more cases
fs = FormatSpec("d")
@test fs == FormatSpec('d')
@test fs.align == '>'
@test FormatSpec("8x") == FormatSpec('x'; width=8)
@test FormatSpec("08b") == FormatSpec('b'; width=8, zpad=true)
@test FormatSpec("12f") == FormatSpec('f'; width=12, prec=6)
@test FormatSpec("12.7f") == FormatSpec('f'; width=12, prec=7)
@test FormatSpec("+08o") == FormatSpec('o'; width=8, zpad=true, sign='+')
@test FormatSpec("8") == FormatSpec('s'; width=8)
@test FormatSpec(".6f") == FormatSpec('f'; prec=6)
@test FormatSpec("<8d") == FormatSpec('d'; width=8, align='<')
@test FormatSpec("#<8d") == FormatSpec('d'; width=8, fill='#', align='<')
@test FormatSpec("⋆<8d") == FormatSpec('d'; width=8, fill='⋆', align='<')
@test FormatSpec("#8,d") == FormatSpec('d'; width=8, ipre=true, tsep=true)
# format string
@test fmt("", "abc") == "abc"
@test fmt("", "αβγ") == "αβγ"
@test fmt("s", "abc") == "abc"
@test fmt("s", "αβγ") == "αβγ"
@test fmt("2s", "abc") == "abc"
@test fmt("2s", "αβγ") == "αβγ"
@test fmt("5s", "abc") == "abc "
@test fmt("5s", "αβγ") == "αβγ "
@test fmt(">5s", "abc") == " abc"
@test fmt(">5s", "αβγ") == " αβγ"
@test fmt("*>5s", "abc") == "**abc"
@test fmt("⋆>5s", "αβγ") == "⋆⋆αβγ"
@test fmt("*<5s", "abc") == "abc**"
@test fmt("⋆<5s", "αβγ") == "αβγ⋆⋆"
# format char
@test fmt("", 'c') == "c"
@test fmt("", 'γ') == "γ"
@test fmt("c", 'c') == "c"
@test fmt("c", 'γ') == "γ"
@test fmt("3c", 'c') == "c "
@test fmt("3c", 'γ') == "γ "
@test fmt(">3c", 'c') == " c"
@test fmt(">3c", 'γ') == " γ"
@test fmt("*>3c", 'c') == "**c"
@test fmt("⋆>3c", 'γ') == "⋆⋆γ"
@test fmt("*<3c", 'c') == "c**"
@test fmt("⋆<3c", 'γ') == "γ⋆⋆"
# format integer
@test fmt("", 1234) == "1234"
@test fmt("d", 1234) == "1234"
@test fmt("n", 1234) == "1234"
@test fmt("x", 0x2ab) == "2ab"
@test fmt("X", 0x2ab) == "2AB"
@test fmt("o", 0o123) == "123"
@test fmt("b", 0b1101) == "1101"
@test fmt("d", 0) == "0"
@test fmt("d", 9) == "9"
@test fmt("d", 10) == "10"
@test fmt("d", 99) == "99"
@test fmt("d", 100) == "100"
@test fmt("d", 1000) == "1000"
@test fmt("06d", 123) == "000123"
@test fmt("+6d", 123) == " +123"
@test fmt("+06d", 123) == "+00123"
@test fmt(" d", 123) == " 123"
@test fmt(" 6d", 123) == " 123"
@test fmt("<6d", 123) == "123 "
@test fmt(">6d", 123) == " 123"
@test fmt("*<6d", 123) == "123***"
@test fmt("⋆<6d", 123) == "123⋆⋆⋆"
@test fmt("*>6d", 123) == "***123"
@test fmt("⋆>6d", 123) == "⋆⋆⋆123"
@test fmt("< 6d", 123) == " 123 "
@test fmt("<+6d", 123) == "+123 "
@test fmt("> 6d", 123) == " 123"
@test fmt(">+6d", 123) == " +123"
@test fmt("+d", -123) == "-123"
@test fmt("-d", -123) == "-123"
@test fmt(" d", -123) == "-123"
@test fmt("06d", -123) == "-00123"
@test fmt("<6d", -123) == "-123 "
@test fmt(">6d", -123) == " -123"
# format floating point (f)
@test fmt("", 0.125) == "0.125"
@test fmt("f", 0.0) == "0.000000"
@test fmt("f", -0.0) == "-0.000000"
@test fmt("f", 0.001) == "0.001000"
@test fmt("f", 0.125) == "0.125000"
@test fmt("f", 1.0/3) == "0.333333"
@test fmt("f", 1.0/6) == "0.166667"
@test fmt("f", -0.125) == "-0.125000"
@test fmt("f", -1.0/3) == "-0.333333"
@test fmt("f", -1.0/6) == "-0.166667"
@test fmt("f", 1234.5678) == "1234.567800"
@test fmt("8f", 1234.5678) == "1234.567800"
@test fmt("8.2f", 8.376) == " 8.38"
@test fmt("<8.2f", 8.376) == "8.38 "
@test fmt(">8.2f", 8.376) == " 8.38"
@test fmt("8.2f", -8.376) == " -8.38"
@test fmt("<8.2f", -8.376) == "-8.38 "
@test fmt(">8.2f", -8.376) == " -8.38"
@test fmt("<08.2f", 8.376) == "00008.38"
@test fmt(">08.2f", 8.376) == "00008.38"
@test fmt("<08.2f", -8.376) == "-0008.38"
@test fmt(">08.2f", -8.376) == "-0008.38"
@test fmt("*<8.2f", 8.376) == "8.38****"
@test fmt("⋆<8.2f", 8.376) == "8.38⋆⋆⋆⋆"
@test fmt("*>8.2f", 8.376) == "****8.38"
@test fmt("⋆>8.2f", 8.376) == "⋆⋆⋆⋆8.38"
@test fmt("*<8.2f", -8.376) == "-8.38***"
@test fmt("⋆<8.2f", -8.376) == "-8.38⋆⋆⋆"
@test fmt("*>8.2f", -8.376) == "***-8.38"
@test fmt("⋆>8.2f", -8.376) == "⋆⋆⋆-8.38"
@test fmt(".2f", 0.999) == "1.00"
@test fmt(".2f", 0.996) == "1.00"
@test fmt("6.2f", 9.999) == " 10.00"
# Floating point error can upset this one (i.e. 0.99500000 or 0.994999999)
@test (fmt(".2f", 0.995) == "1.00" || fmt(".2f", 0.995) == "0.99")
@test fmt(".2f", 0.994) == "0.99"
# format floating point (e)
@test fmt("E", 0.0) == "0.000000E+00"
@test fmt("e", 0.0) == "0.000000e+00"
@test fmt("e", 0.001) == "1.000000e-03"
@test fmt("e", 0.125) == "1.250000e-01"
@test fmt("e", 100/3) == "3.333333e+01"
@test fmt("e", -0.125) == "-1.250000e-01"
@test fmt("e", -100/6) == "-1.666667e+01"
@test fmt("e", 1234.5678) == "1.234568e+03"
@test fmt("8e", 1234.5678) == "1.234568e+03"
@test fmt("<12.2e", 13.89) == "1.39e+01 "
@test fmt(">12.2e", 13.89) == " 1.39e+01"
@test fmt("*<12.2e", 13.89) == "1.39e+01****"
@test fmt("⋆<12.2e", 13.89) == "1.39e+01⋆⋆⋆⋆"
@test fmt("*>12.2e", 13.89) == "****1.39e+01"
@test fmt("⋆>12.2e", 13.89) == "⋆⋆⋆⋆1.39e+01"
@test fmt("012.2e", 13.89) == "00001.39e+01"
@test fmt("012.2e", -13.89) == "-0001.39e+01"
@test fmt("+012.2e", 13.89) == "+0001.39e+01"
@test fmt(".1e", 0.999) == "1.0e+00"
@test fmt(".1e", 0.996) == "1.0e+00"
# Floating point error can upset this one (i.e. 0.99500000 or 0.994999999)
@test (fmt(".1e", 0.995) == "1.0e+00" || fmt(".1e", 0.995) == "9.9e-01")
@test fmt(".1e", 0.994) == "9.9e-01"
@test fmt(".1e", 0.6) == "6.0e-01"
@test fmt(".1e", 0.9) == "9.0e-01"
# issue #61
@test fmt("1.0e", 1e-21) == "1.e-21"
@test fmt("1.1e", 1e-21) == "1.0e-21"
@test fmt("10.2e", 1.2e100) == " 1.20e+100"
@test fmt("11.2e", BigFloat("1.2e1000")) == " 1.20e+1000"
@test fmt("11.2e", BigFloat("1.2e-1000")) == " 1.20e-1000"
@test fmt("9.2e", 9.999e9) == " 1.00e+10"
@test fmt("10.2e", 9.999e99) == " 1.00e+100"
@test fmt("11.2e", BigFloat("9.999e999")) == " 1.00e+1000"
@test fmt("10.2e", -9.999e-100) == " -1.00e-99"
# issue #84
@test fmt("+11.3e", 1.0e-309) == "+1.000e-309"
@test fmt("+11.3e", 1.0e-313) == "+1.000e-313"
# format special floating point value
@test fmt("f", NaN) == "NaN"
@test fmt("e", NaN) == "NaN"
@test fmt("f", NaN32) == "NaN"
@test fmt("e", NaN32) == "NaN"
@test fmt("f", Inf) == "Inf"
@test fmt("e", Inf) == "Inf"
@test fmt("f", Inf32) == "Inf"
@test fmt("e", Inf32) == "Inf"
@test fmt("f", -Inf) == "-Inf"
@test fmt("e", -Inf) == "-Inf"
@test fmt("f", -Inf32) == "-Inf"
@test fmt("e", -Inf32) == "-Inf"
@test fmt("<5f", Inf) == "Inf "
@test fmt(">5f", Inf) == " Inf"
@test fmt("*<5f", Inf) == "Inf**"
@test fmt("⋆<5f", Inf) == "Inf⋆⋆"
@test fmt("*>5f", Inf) == "**Inf"
@test fmt("⋆>5f", Inf) == "⋆⋆Inf"
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 1847 | using Formatting
using Test
# with positional arguments
@test format("{1}", 10) == "10"
@test format("abc {1}", 10) == "abc 10"
@test format("αβγ {1}", 10) == "αβγ 10"
@test format("{1} efg", 10) == "10 efg"
@test format("{1} ϵζη", 10) == "10 ϵζη"
@test format("abc {1} efg", 10) == "abc 10 efg"
@test format("αβγ{1}ϵζη", 10) == "αβγ10ϵζη"
@test format("{1} + {2}", 10, "xyz") == "10 + xyz"
@test format("{1} + {2}", 10, "χψω") == "10 + χψω"
@test format("abc {1} + {2}", 10, "xyz") == "abc 10 + xyz"
@test format("αβγ {1} + {2}", 10, "χψω") == "αβγ 10 + χψω"
@test format("{1} + {2} efg", 10, "xyz") == "10 + xyz efg"
@test format("{1} + {2} ϵζη", 10, "χψω") == "10 + χψω ϵζη"
@test format("abc {1} + {2} efg", 10, "xyz") == "abc 10 + xyz efg"
@test format("αβγ {1} + {2} ϵζη", 10, "χψω") == "αβγ 10 + χψω ϵζη"
@test format("αβγ {1}{2} ϵζη", 10, "χψω") == "αβγ 10χψω ϵζη"
@test format("{1:d} + {2:s}", 10, "xyz") == "10 + xyz"
@test format("{1:d} + {2:s}", 10, "χψω") == "10 + χψω"
@test format("{1:04d} + {2:*>5}", 10, "xyz") == "0010 + **xyz"
@test format("{1:04d} + {2:⋆>5}", 10, "χψω") == "0010 + ⋆⋆χψω"
@test format("let {2:<5} := {1:.4f};", 12.3, "χψω") == "let χψω := 12.3000;"
@test format("{}", 10) == "10"
@test format("{} + {}", 10, 20) == "10 + 20"
@test format("{} + {:04d}", 10, 20) == "10 + 0020"
@test format("{:03d} + {}", 10, 20) == "010 + 20"
@test format("{:03d} + {:04d}", 10, 20) == "010 + 0020"
@test_throws(ErrorException, format("{1} + {}", 10, 20) )
@test_throws(ErrorException, format("{} + {1}", 10, 20) )
# escape {{ and }}
@test format("{{}}") == "{}"
@test format("{{{1}}}", 10) == "{10}"
@test format("v: {{{2}}} = {1:.4f}", 1.2, "ab") == "v: {ab} = 1.2000"
@test format("χ: {{{2}}} = {1:.4f}", 1.2, "αβ") == "χ: {αβ} = 1.2000"
# with filter
@test format("{1|>abs2} + {2|>abs2:.2f}", 2, 3) == "4 + 9.00"
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 495 | # performance testing
using Formatting
# performance of format parsing
fp1() = FormatExpr("{1} + {2} + {3}")
fp1()
@time for i=1:10000; fp1(); end
fp2() = FormatExpr("abc {1:*<5d} efg {2:*>8.4e} hij")
fp2()
@time for i=1:10000; fp2(); end
# performance of string formatting
const fe1 = fp1()
const fe2 = fp2()
sf1(x, y, z) = format(fe1, x, y, z)
sf1(10, 20, 30)
@time for i=1:10000; sf1(10, 20, 30); end
sf2(x, y) = format(fe2, x, y)
sf2(10, 20)
@time for i=1:10000; sf2(10, 20); end
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | code | 104 | using Formatting
using Test
include( "cformat.jl" )
include( "fmtspec.jl" )
include( "formatexpr.jl" )
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.4.3 | fb409abab2caf118986fc597ba84b50cbaf00b87 | docs | 10096 | > [!WARNING]
> This package is unmaintained: active work is not occurring on this package. For an actively maintained fork, see [Format.jl](https://github.com/JuliaString/Format.jl).
# Formatting
This package offers Python-style general formatting and c-style numerical formatting (for speed).
| **PackageEvaluator** | **Build Status** | **Repo Status**
|:---------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:---------------------------------------------------------------:|
|[![][pkg-0.6-img]][pkg-0.6-url] | [![][travis-img]][travis-url] [![][appveyor-img]][appveyor-url] [![][codecov-img]][codecov-url] | [](https://www.repostatus.org/#unsupported) |
[travis-img]: https://travis-ci.org/JuliaIO/Formatting.jl.svg?branch=master
[travis-url]: https://travis-ci.org/JuliaIO/Formatting.jl
[appveyor-img]: https://ci.appveyor.com/api/projects/status/all0t7gefcl5dgv1/branch/master?svg=true
[appveyor-url]: https://ci.appveyor.com/project/jmkuhn/formatting-jl/branch/master
[codecov-img]: https://codecov.io/gh/JuliaIO/Formatting.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/JuliaIO/Formatting.jl
[pkg-0.6-img]: http://pkg.julialang.org/badges/Formatting_0.6.svg
[pkg-0.6-url]: http://pkg.julialang.org/?pkg=Formatting
---------------
## Getting Started
This package is pure Julia. Setting up this package is like setting up other Julia packages:
```julia
Pkg.add("Formatting")
```
To start using the package, you can simply write
```julia
using Formatting
```
This package requires Julia 0.7 or above and has no other dependencies. The package is MIT-licensed.
## Python-style Types and Functions
#### Types to Represent Formats
This package has two types ``FormatSpec`` and ``FormatExpr`` to represent a format specification.
In particular, ``FormatSpec`` is used to capture the specification of a single entry. One can compile a format specification string into a ``FormatSpec`` instance as
```julia
fspec = FormatSpec("d")
fspec = FormatSpec("<8.4f")
```
Please refer to [Python's format specification language](http://docs.python.org/2/library/string.html#formatspec) for details.
``FormatExpr`` captures a formatting expression that may involve multiple items. One can compile a formatting string into a ``FormatExpr`` instance as
```julia
fe = FormatExpr("{1} + {2}")
fe = FormatExpr("{1:d} + {2:08.4e} + {3|>abs2}")
```
Please refer to [Python's format string syntax](http://docs.python.org/2/library/string.html#format-string-syntax) for details.
**Note:** If the same format is going to be applied multiple times, it is more efficient to compile it first.
#### Formatted Printing
One can use ``printfmt`` and ``printfmtln`` for formatted printing:
- **printfmt**(io, fe, args...)
- **printfmt**(fe, args...)
Print the given arguments using the format ``fe``. Here ``fe`` can be a formatting string, or an instance of ``FormatSpec`` or ``FormatExpr``.
**Examples**
```julia
printfmt("{1:>4s} + {2:.2f}", "abc", 12) # --> print(" abc + 12.00")
printfmt("{} = {:#04x}", "abc", 12) # --> print("abc = 0x0c")
fs = FormatSpec("#04x")
printfmt(fs, 12) # --> print("0x0c")
fe = FormatExpr("{} = {:#04x}")
printfmt(fe, "abc", 12) # --> print("abc = 0x0c")
```
**Notes**
If the first argument is a string, it will first be compiled into a ``FormatExpr``, which implies that you cannot use a specification-only string as the first argument.
```julia
printfmt("{1:d}", 10) # OK, "{1:d}" can be compiled into a FormatExpr instance
printfmt("d", 10) # Error, "d" cannot be compiled into a FormatExpr instance;
                  # a specification-only string like this must be wrapped in a FormatSpec instead
printfmt(FormatSpec("d"), 10) # OK
printfmt(FormatExpr("{1:d}"), 10) # OK
```
- **printfmtln**(io, fe, args...)
- **printfmtln**(fe, args...)
Similar to ``printfmt`` except that this function prints a newline at the end.
#### Formatted String
One can use ``fmt`` to format a single value into a string, or ``format`` to format one or more arguments into a string using a format expression.
- **fmt**(fspec, a)
Format a single value using a format specification given by ``fspec``, where ``fspec`` can be either a string or an instance of ``FormatSpec``.
- **format**(fe, args...)
Format arguments using a format expression given by ``fe``, where ``fe`` can be either a string or an instance of ``FormatExpr``.
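For instance (the expected outputs shown in the comments are taken from this package's test suite):
```julia
fmt("06d", 123)                          # "000123"
fmt(".2f", 0.999)                        # "1.00"
format("{1:04d} + {2:*>5}", 10, "xyz")   # "0010 + **xyz"
```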
#### Difference from Python's Format
At this point, this package implements a subset of Python's formatting language (with slight modification). Here is a summary of the differences:
- ``g`` and ``G`` for floating point formatting are not yet supported. Please use ``f``, ``e``, or ``E`` instead.
- The package currently provides default alignment, left alignment ``<`` and right alignment ``>``. Other forms of alignment, such as centered alignment ``^``, are not yet supported.
- In terms of argument specification, it supports natural ordering (e.g. ``{} + {}``) and explicit positions (e.g. ``{1} + {2}``). Named arguments and field extraction are not yet supported. Note that mixing these two modes is not allowed (e.g. ``{1} + {}``).
- The package provides support for filtering (for explicitly positioned arguments), such as ``{1|>lowercase}``, by allowing one to embed the ``|>`` operator, which the Python counterpart does not support; see the example below.
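For example, the ``|>`` filter applies a function to the argument before formatting (output as asserted in the test suite):
```julia
format("{1|>abs2} + {2|>abs2:.2f}", 2, 3)   # "4 + 9.00"
```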
## C-style functions
The c-style part of this package aims to get around the limitation that
`@sprintf` has to take a literal string argument.
The core part is basically a c-style print formatter using the standard
`@sprintf` macro.
It also adds functionality such as a thousands separator (commas), parentheses for negatives,
stripping of trailing zeros, and mixed fractions.
### Usage and Implementation
The idea here is that the package compiles a function only once for each unique
format string within the `Formatting.*` namespace, so repeated use is faster.
Unrelated parts of a session using the same format string reuse the same
function, avoiding redundant compilation. To avoid a proliferation of generated
functions, usage is limited to a single argument. Practical considerations
suggest that only dozens of such functions would be created in a session, which
seems manageable.
Usage
```julia
using Formatting
fmt = "%10.3f"
s = sprintf1( fmt, 3.14159 ) # usage 1. Quite performant. Easiest to switch to.
fmtrfunc = generate_formatter( fmt ) # usage 2. This bypass repeated lookup of cached function. Most performant.
s = fmtrfunc( 3.14159 )
s = format( 3.14159, precision=3 ) # usage 3. Most flexible, with some non-printf options. Least performant.
```
### Speed
`sprintf1`: Speed penalty is about 20% for floating point and 30% for integers.
If the formatter is stored and used instead (see the example using `generate_formatter` above),
the speed penalty reduces to 10% for floating point and 15% for integers.
### Commas
This package also supplies the thousands separator that standard printf formatting lacks, e.g. `"%'d"`, `"%'f"`, `"%'s"`.
Note: with `"%'s"`, a thousands separator is used for floating-point values that are small enough to be
printed in fixed notation; if the number needs to be represented by `"%e"`, no
separator is used.
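A small sketch of the comma flag (the call is illustrative; width handling, including a known padding quirk, is exercised in the test suite):
```julia
fmtrfunc = generate_formatter("%'10d")
fmtrfunc(1234567)    # "1,234,567" right-justified to width 10 (the test suite notes an extra-space quirk here)
```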
### Flexible `format` function
This package also contains a run-time number formatter, the `format` function, which goes beyond
the standard `sprintf` functionality.
An example:
```julia
s = format( 1234, commas=true ) # 1,234
s = format( -1234, commas=true, parens=true ) # (1,234)
```
The keyword arguments are listed below (bold keywords are not printf-standard); examples follow the list.
* width. Integer. Try to fit the output into this many characters. This may not always succeed; space is sacrificed first, then commas.
* precision. Integer. How many decimal places.
* leftjustified. Boolean
* zeropadding. Boolean
* commas. Boolean. Thousands-group separator.
* signed. Boolean. Always show +/- sign?
* positivespace. Boolean. Prepend an extra space for positive numbers? (so they align nicely with negative numbers)
* **parens**. Boolean. Use parentheses instead of "-", e.g. `(1.01)` instead of `-1.01`. Useful in finance. Note that
the `signed` and `parens` options cannot be used at the same time.
* **stripzeros**. Boolean. Strip trailing '0' to the right of the decimal (and to the left of 'e', if any ).
* It may strip the decimal point itself if all trailing places are zeros.
* This is true by default if precision is not given, and vice versa.
* alternative. Boolean. See `#` alternative form explanation in standard printf documentation
* conversion. length=1 string. Default is type dependent. It can be one of `aAeEfFoxX`. See standard
printf documentation.
* **mixedfraction**. Boolean. If the number is rational, format it in mixed fraction e.g. `1_1/2` instead of `3/2`
* **mixedfractionsep**. Default `_`
* **fractionsep**. Default `/`
* **fractionwidth**. Integer. Try to pad zeros to the numerator until the fractional part has this width
* **tryden**. Integer. Try to use this denominator instead of a smaller one. No-op if it'd lose precision.
* **suffix**. String. This string will be appended to the output. Useful for units or %.
* **autoscale**. Symbol, default `:none`. It could be `:metric`, `:binary`, or `:finance`.
* `:metric` implements common SI symbols for large and small numbers e.g. `M`, `k`, `μ`, `n`
* `:binary` implements common ISQ symbols for large numbers e.g. `Ti`, `Gi`, `Mi`, `Ki`
* `:finance` implements common finance/news symbols for large numbers e.g. `b` (billion), `m` (millions)
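A few combinations, with outputs as asserted in the test suite:
```julia
format(12345678, width=10, commas=true)               # "12,345,678"
format(-302//100, mixedfraction=true, tryden=100)     # "-3_2/100"
format(1.2e9, autoscale=:metric)                      # "1.2G"
format(100, precision=2, suffix="%", conversion="f")  # "100.00%"
```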
See the test script for more examples.
| Formatting | https://github.com/JuliaIO/Formatting.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 564 | using Documenter, NoncommutativeGraphs
DocMeta.setdocmeta!(NoncommutativeGraphs, :DocTestSetup, :( using NoncommutativeGraphs; using LinearAlgebra ); recursive=true)
makedocs(
sitename="NoncommutativeGraphs.jl",
modules=[NoncommutativeGraphs],
format = Documenter.HTML(
prettyurls = get(ENV, "CI", nothing) == "true"
),
pages = [
"Home" => "index.md",
"Reference" => "reference.md",
],
)
deploydocs(
repo = "github.com/dstahlke/NoncommutativeGraphs.jl.git",
devbranch = "main",
branch = "gh-pages",
)
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 14931 | module NoncommutativeGraphs
import Base.==
using DocStringExtensions
using Subspaces
using Convex, SCS, LinearAlgebra
using Random, RandomMatrices
using Graphs
using Compat
using MathOptInterface
import Base.show
export AlgebraShape
export S0Graph
export create_S0_S1
export random_S0Graph, empty_S0Graph, complement, vertex_graph, forget_S0
export from_block_spaces, get_block_spaces
export block_expander
export random_S0_unitary, random_S0_density
export random_S1_unitary, random_S1_density
export Ψ
export dsw_schur, dsw_schur2
export dsw, dsw_via_complement
MOI = MathOptInterface
eye(n) = Matrix(1.0*I, (n,n))
function make_optimizer(verbose, eps)
optimizer = SCS.Optimizer()
if isdefined(MOI, :RawOptimizerAttribute) # as of MathOptInterface v0.10.0
MOI.set(optimizer, MOI.RawOptimizerAttribute("verbose"), 0)
if isdefined(SCS, :ScsSettings) && hasfield(SCS.ScsSettings, :eps_rel) # as of SCS v0.9
MOI.set(optimizer, MOI.RawOptimizerAttribute("eps_rel"), eps)
MOI.set(optimizer, MOI.RawOptimizerAttribute("eps_abs"), eps)
else
MOI.set(optimizer, MOI.RawOptimizerAttribute("eps"), eps)
end
else
MOI.set(optimizer, MOI.RawParameter("verbose"), 0)
if isdefined(SCS, :ScsSettings) && hasfield(SCS.ScsSettings, :eps_rel) # as of SCS v0.9
MOI.set(optimizer, MOI.RawParameter("eps_rel"), eps)
MOI.set(optimizer, MOI.RawParameter("eps_abs"), eps)
else
MOI.set(optimizer, MOI.RawParameter("eps"), eps)
end
end
return optimizer
end
"""
The structure of a finite dimensional C*-algebra.
- For example, `[1 2; 3 4]` corresponds to S₀ = M₁⊗I₂ ⊕ M₃⊗I₄.
- For an n-dimensional non-commutative graph use `[1 n]` for S₀ = Iₙ.
- For an n-vertex classical graph use `ones(Integer, n, 2)` for S₀ = diagonals.
"""
AlgebraShape = Array{<:Integer, 2}
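# Illustrative AlgebraShape values (the specific numbers are arbitrary, not canonical):
#   [2 3; 1 2]            # S₀ = M₂⊗I₃ ⊕ M₁⊗I₂, acting on a 2*3 + 1*2 = 8 dimensional space
#   [1 5]                 # S₀ = ℂI on a 5-dimensional space (general non-commutative graph)
#   ones(Integer, 4, 2)   # S₀ = diagonal matrices on a 4-dimensional space (classical graph)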
"""
create_S0_S1(sig::AlgebraShape) -> Tuple{Subspace, Subspace}
Create a C*-algebra and its commutant with the given structure.
"""
function create_S0_S1(sig::AlgebraShape)
blocks0 = []
blocks1 = []
@compat for row in eachrow(sig)
if length(row) != 2
throw(ArgumentError("row length must be 2"))
end
dA = row[1]
dY = row[2]
#println("dA=$dA dY=$dY")
blk0 = kron(full_subspace(ComplexF64, (dA, dA)), Matrix((1.0+0.0im)*I, dY, dY))
blk1 = kron(Matrix((1.0+0.0im)*I, dA, dA), full_subspace(ComplexF64, (dY, dY)))
#println(blk0)
push!(blocks0, blk0)
push!(blocks1, blk1)
end
S0 = cat(blocks0..., dims=(1,2))
S1 = cat(blocks1..., dims=(1,2))
@assert I in S0
@assert I in S1
s0 = random_element(S0)
s1 = random_element(S1)
@assert (norm(s0 * s1 - s1 * s0) < 1e-9) "S0 and S1 don't commute"
return S0, S1
end
"""
S0Graph(sig::AlgebraShape, S::Subspace{ComplexF64, 2})
S0Graph(g::AbstractGraph)
Represents an S₀-graph as defined in arxiv:1002.2514.
$(TYPEDFIELDS)
"""
struct S0Graph
"""Dimension of Hilbert space A such that S ⊆ L(A)"""
n::Integer
"""Structure of C*-algebra S₀"""
sig::AlgebraShape
"""Subspace that defines the graph"""
S::Subspace{ComplexF64, 2}
"""C*-algebra S₀"""
S0::Subspace{ComplexF64, 2}
"""Commutant of C*-algebra S₀"""
S1::Subspace{ComplexF64, 2}
"""Block scaling array D from definition 23 of arxiv:2101.00162"""
D::Array{Float64, 2}
function S0Graph(sig::AlgebraShape, S::Subspace{ComplexF64, 2})
S0, S1 = create_S0_S1(sig)
S == S' || throw(DomainError("S is not an S0-graph"))
S0 in S || throw(DomainError("S is not an S0-graph"))
(S == S0 * S * S0) || throw(DomainError("S is not an S0-graph"))
n = size(S0)[1]
da_sizes = sig[:,1]
dy_sizes = sig[:,2]
n_sizes = da_sizes .* dy_sizes
D = cat([ v*eye(n) for (n, v) in zip(n_sizes, dy_sizes ./ da_sizes) ]..., dims=(1,2))
return new(n, sig, S, S0, S1, D)
end
function S0Graph(g::AbstractGraph)
n = nv(g)
S0 = Subspace([ (x=zeros(ComplexF64, n,n); x[i,i]=1; x) for i in 1:n ])
S = Subspace([ (x=zeros(ComplexF64, n,n); x[src(e),dst(e)]=1; x) for e in edges(g) ])
S = S + S' + S0
return S0Graph(ones(Int64, n, 2), S)
end
end
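# Example (illustrative), mirroring test/classical_graph.jl:
#   using Graphs: cycle_graph
#   g = S0Graph(cycle_graph(5))   # S0 = diagonal matrices; S spanned by the diagonal and the edge matrix units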
function show(io::IO, g::S0Graph)
print(io, "S0Graph{S0=$(g.sig) S=$(g.S)}")
end
"""
$(TYPEDSIGNATURES)
Returns the S₀-graph with S=S₀.
"""
vertex_graph(g::S0Graph) = S0Graph(g.sig, g.S0)
"""
$(TYPEDSIGNATURES)
Returns an S₀-graph with S₀=ℂI.
"""
forget_S0(g::S0Graph) = S0Graph([1 g.n], g.S)
"""
random_S0Graph(sig::AlgebraShape) -> S0Graph
Creates a random S₀-graph with S₀ having the given structure.
"""
function random_S0Graph(sig::AlgebraShape)
S0, S1 = create_S0_S1(sig)
num_blocks = size(sig, 1)
function block(col, row)
da_col, dy_col = sig[col,:]
da_row, dy_row = sig[row,:]
ds = Integer(round(dy_row * dy_col / 2.0))
F = full_subspace(ComplexF64, (da_row, da_col))
if row == col
R = random_hermitian_subspace(ComplexF64, ds, dy_row)
elseif row > col
R = random_subspace(ComplexF64, ds, (dy_row, dy_col))
else
R = empty_subspace(ComplexF64, (dy_row, dy_col))
end
return kron(F, R)
end
blocks = Array{Subspace{ComplexF64, 2}, 2}([
block(col, row)
for col in 1:num_blocks, row in 1:num_blocks
])
S = hvcat(num_blocks, blocks...)
S |= S'
S |= S0
return S0Graph(sig, S)
end
"""
empty_S0Graph(sig::AlgebraShape) -> S0Graph
Creates an empty S₀-graph (i.e. S=S₀) with S₀ having the given structure.
"""
function empty_S0Graph(sig::AlgebraShape)
S0, S1 = create_S0_S1(sig)
return S0Graph(sig, S0)
end
"""
$(TYPEDSIGNATURES)
Returns the complement graph perp(S) + S₀.
"""
complement(g::S0Graph) = S0Graph(g.sig, perp(g.S) | g.S0)
function ==(a::S0Graph, b::S0Graph)
return a.sig == b.sig && a.S == b.S
end
function get_block_spaces(g::S0Graph)
num_blocks = size(g.sig, 1)
da_sizes = g.sig[:,1]
dy_sizes = g.sig[:,2]
n_sizes = da_sizes .* dy_sizes
blkspaces = Array{Subspace{ComplexF64}, 2}(undef, num_blocks, num_blocks)
offseti = 0
for blki in 1:num_blocks
offsetj = 0
for blkj in 1:num_blocks
#@show [blki, blkj, offseti, offsetj]
blkbasis = Array{Array{ComplexF64, 2}, 1}()
for m in each_basis_element(g.S)
blk = m[1+offseti:dy_sizes[blki]+offseti, 1+offsetj:dy_sizes[blkj]+offsetj]
push!(blkbasis, blk)
end
blkspaces[blki, blkj] = Subspace(blkbasis)
#println(blkspaces[blki, blkj])
offsetj += n_sizes[blkj]
end
@assert offsetj == size(g.S)[2]
offseti += n_sizes[blki]
end
@assert offseti == size(g.S)[1]
return blkspaces
end
function from_block_spaces(sig::AlgebraShape, blkspaces::Array{Subspace{ComplexF64}, 2})
S0, S1 = NoncommutativeGraphs.create_S0_S1(sig)
num_blocks = size(sig, 1)
function block(col, row)
da_col, dy_col = sig[col,:]
da_row, dy_row = sig[row,:]
ds = Integer(round(sqrt(dy_row * dy_col) / 2.0))
F = full_subspace(ComplexF64, (da_row, da_col))
kron(F, blkspaces[row, col])
end
blocks = [
block(col, row)
for col in 1:num_blocks, row in 1:num_blocks
]
S = hvcat(num_blocks, blocks...)
S |= S'
S |= S0
return S0Graph(sig, S)
end
function block_expander(sig::AlgebraShape)
function basis_mat(n, i)
M = zeros(n*n)
M[i] = 1
return reshape(M, (n, n))
end
n = sum(prod(sig, dims=2))
@compat J = cat([
cat([ kron(eye(dA), basis_mat(dY, i)) for i in 1:dY^2 ]..., dims=3)
for (dA, dY) in eachrow(sig)
]..., dims=(1,2,3))
return reshape(J, (n^2, size(J)[3]))
end
function random_positive(n)
U = rand(Haar(2), n)
return Hermitian(U' * Diagonal(rand(n)) * U)
end
"""
Returns a random unitary in S₀.
"""
function random_S0_unitary(sig::AlgebraShape)
@compat return cat([
kron(rand(Haar(2), dA), eye(dY))
for (dA, dY) in eachrow(sig)
]..., dims=(1,2))
end
"""
Returns a random density operator in S₀.
"""
function random_S0_density(sig::AlgebraShape)
@compat ρ = cat([
kron(random_positive(dA), eye(dY))
for (dA, dY) in eachrow(sig)
]..., dims=(1,2))
return ρ / tr(ρ)
end
"""
Returns a random unitary in the commutant of S₀.
"""
function random_S1_unitary(sig::AlgebraShape)
@compat return cat([
kron(eye(dA), rand(Haar(2), dY))
for (dA, dY) in eachrow(sig)
]..., dims=(1,2))
end
"""
Returns a random density operator in the commutant of S₀.
"""
function random_S1_density(sig::AlgebraShape)
@compat ρ = cat([
kron(eye(dA), random_positive(dY))
for (dA, dY) in eachrow(sig)
]..., dims=(1,2))
return ρ / tr(ρ)
end
###############
### DSW solvers
###############
# FIXME Convex.jl doesn't support cat(args..., dims=(1,2)). It should be added.
function diagcat(args::Convex.AbstractExprOrValue...)
num_blocks = size(args, 1)
return vcat([
hcat([
row == col ? args[row] : zeros(size(args[row], 1), size(args[col], 2))
for col in 1:num_blocks
]...)
for row in 1:num_blocks
]...)
end
"""
$(TYPEDSIGNATURES)
Block scaling superoperator Ψ from definition 23 of arxiv:2101.00162
"""
function Ψ(g::S0Graph, w::Union{AbstractArray{<:Number, 2}, Variable})
n = g.n
da_sizes = g.sig[:,1]
dy_sizes = g.sig[:,2]
n_sizes = da_sizes .* dy_sizes
num_blocks = size(g.sig)[1]
k = 0
blocks = []
for (dai, dyi) in zip(da_sizes, dy_sizes)
ni = dai * dyi
TrAi = partialtrace(w[k+1:k+ni, k+1:k+ni], 1, [dai; dyi])
blk = dyi^-1 * kron(Array(1.0*I, dai, dai), TrAi)
k += ni
push!(blocks, blk)
end
@assert k == n
out = diagcat(blocks...)
@assert size(out) == (n, n)
return out
end
"""
$(TYPEDSIGNATURES)
Schur complement form of weighted θ from theorem 14 of arxiv:2101.00162.
Returns λ, w, and Z variables (for Convex.jl) in a named tuple.
See also: [`dsw_schur2`](@ref).
"""
function dsw_schur(g::S0Graph)::NamedTuple{(:λ, :w, :Z), Tuple{Convex.AbstractExpr, Convex.AbstractExpr, Convex.AbstractExpr}}
n = size(g.S)[1]
Z = sum(kron(m, ComplexVariable(n, n)) for m in hermitian_basis(g.S))
# slow:
#Z = ComplexVariable(n^2, n^2)
#add_constraint!(Z, Z in kron(g.S, full_subspace(ComplexF64, (n, n))))
# slow:
#SB = kron(g.S, full_subspace(ComplexF64, (n, n)))
#Z = variable_in_space(SB)
λ = Variable()
wt = partialtrace(Z, 1, [n; n])
wv = reshape(wt, n*n, 1)
add_constraint!(λ, [ λ wv' ; wv Z ] ⪰ 0)
return (λ=λ, w=transpose(wt), Z=Z)
end
"""
$(TYPEDSIGNATURES)
Schur complement form of weighted θ from theorem 14 of arxiv:2101.00162, optimized for the
case S₀ ≠ ℂI, at the cost of w being constrained to S₁ (the commutant of S₀).
Returns λ, w, and Z variables (for Convex.jl) in a named tuple.
See also: [`dsw_schur`](@ref).
"""
function dsw_schur2(g::S0Graph)::NamedTuple{(:λ, :w, :Z), Tuple{Convex.AbstractExpr, Convex.AbstractExpr, Convex.AbstractExpr}}
da_sizes = g.sig[:,1]
dy_sizes = g.sig[:,2]
num_blocks = size(g.sig)[1]
blkspaces = get_block_spaces(g)
w_blocks = Array{Convex.AbstractExpr, 1}(undef, num_blocks)
Z_blocks = Array{Convex.AbstractExpr, 2}(undef, num_blocks, num_blocks)
for blki in 1:num_blocks
for blkj in 1:num_blocks
if blkj <= blki
if dim(blkspaces[blki, blkj]) == 0
Z_blocks[blki, blkj] = zeros(dy_sizes[blki]^2, dy_sizes[blkj]^2)
else
blkV = sum(kron(m, ComplexVariable(dy_sizes[blki], dy_sizes[blkj]))
for m in each_basis_element(blkspaces[blki, blkj]))
Z_blocks[blki, blkj] = blkV
# slow:
#SB = kron(blkspaces[blki, blkj], full_subspace(ComplexF64, (dy_sizes[blki], dy_sizes[blkj])))
#Z_blocks[blki, blkj] = variable_in_space(SB)
end
end
if blkj == blki
w_blocks[blki] = partialtrace(Z_blocks[blki, blkj], 1, [dy_sizes[blki], dy_sizes[blkj]])
end
end
end
for blki in 1:num_blocks
for blkj in 1:num_blocks
if blkj > blki
Z_blocks[blki, blkj] = Z_blocks[blkj, blki]'
end
#@show blki, blkj
#@show size(Z_blocks[blki, blkj])
end
end
#@show [ size(wi) for wi in w_blocks ]
λ = Variable()
Z = vcat([ hcat([Z_blocks[i,j] for j in 1:num_blocks]...) for i in 1:num_blocks ]...)
wv = vcat([ reshape(wi, dy_sizes[i]^2, 1) for (i,wi) in enumerate(w_blocks) ]...)
#@show size(wv)
#@show size(Z)
add_constraint!(λ, [ λ wv' ; wv Z ] ⪰ 0)
wt = diagcat([ kron(eye(da_sizes[i]), wi) for (i,wi) in enumerate(w_blocks) ]...)
#@show size(wt)
return (λ=λ, w=transpose(wt), Z=Z)
end
"""
$(TYPEDSIGNATURES)
Compute weighted θ using theorem 14 of arxiv:2101.00162.
Returns optimal λ, x, and Z values in a named tuple.
If `use_diag_optimization=true` (the default) then `x ⪰ w` and `x` is in the commutant
of S₀. By theorem 29 of arxiv:2101.00162, θ(g, w) = θ(g, x).
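A minimal usage sketch (the algebra signature and the identity weight below are
arbitrary choices for illustration):

    g = random_S0Graph([2 2; 1 3])
    w = Matrix{ComplexF64}(I, g.n, g.n)   # I from LinearAlgebra
    dsw(g, w).λ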
"""
function dsw(g::S0Graph, w::AbstractArray{<:Number, 2}; use_diag_optimization=true, eps=1e-6, verbose=0)
if use_diag_optimization
λ, x, Z = dsw_schur2(g)
else
λ, x, Z = dsw_schur(g)
end
problem = minimize(λ, [x ⪰ w])
solve!(problem, () -> make_optimizer(verbose, eps))
return (λ=problem.optval, x=Hermitian(evaluate(x)), Z=Hermitian(evaluate(Z)))
end
"""
$(TYPEDSIGNATURES)
Compute weighted θ via the complement graph, using theorem 29 of arxiv:2101.00162.
θ(S, w) = max{ tr(w x) : x ⪰ 0, y = Ψ(x), θ(Sᶜ, y) ≤ 1 }
Returns optimal λ, x, y, and Z in a named tuple.
If w is in the commutant of S₀ then the weights w and y saturate the inequality in
theorem 32 of arxiv:2101.00162.
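A minimal sketch of the duality relation checked in test/simple_duality.jl
(the graph and the identity weight are arbitrary choices for illustration):

    g = random_S0Graph([3 2; 2 3])
    w = Matrix{ComplexF64}(I, g.n, g.n)
    dsw_via_complement(complement(g), w).λ ≈ dsw(g, w).λ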
"""
function dsw_via_complement(g::S0Graph, w::AbstractArray{<:Number, 2}; use_diag_optimization=true, eps=1e-6, verbose=0)
# max{ <w,x> : Ψ(S, x) ⪯ y, ϑ(S, y) ≤ 1, y ∈ S1 }
# equal to:
# max{ dsw(S0, √y * w * √y) : dsw(complement(S), y) <= 1 }
if use_diag_optimization
λ, y, Z = dsw_schur2(g)
else
λ, y, Z = dsw_schur(g)
end
x = HermitianSemidefinite(g.n, g.n)
problem = maximize(real(tr(w * x')), [ λ <= 1, Ψ(g, x) == y ])
solve!(problem, () -> make_optimizer(verbose, eps))
return (λ=problem.optval, x=Hermitian(evaluate(x)), y=Hermitian(evaluate(y)), Z=Hermitian(evaluate(Z)))
end
end
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 1627 | function f(g, w, v)
n = size(w, 1)
q = HermitianSemidefinite(n)
problem = maximize(real(tr(w * q')), [ Ψ(g, q) ⪯ v ])
solve!(problem, () -> make_optimizer(0, solver_eps))
return problem.optval
end
function g(g, w)
D = g.D
#z = HermitianSemidefinite(g.n)
#problem = minimize(real(tr(D * z)), [ z ⪰ w, z in g.S1 ])
#problem = minimize(real(tr(z)), [ z ⪰ √D * w * √D, z in g.S1 ])
z = variable_in_space(g.S1)
problem = minimize(real(tr(D * z)), [ z ⪰ w ])
#problem = minimize(real(tr(z)), [ z ⪰ √D * w * √D ])
solve!(problem, () -> make_optimizer(0, solver_eps))
return problem.optval
end
#function h(g, w, y)
# D = g.D
#
# z = variable_in_space(g.S1)
# problem = minimize(real(tr(y * √D * z * √D)), [ z ⪰ w ])
# #problem = minimize(real(tr(D * √y * z * √y)), [ z ⪰ w ])
# #problem = minimize(real(tr(z)), [ z ⪰ √D * √y * w * √y * √D ])
#
# solve!(problem, () -> make_optimizer(0, solver_eps))
# return problem.optval
#end
Random.seed!(0)
#sig = [1 2; 2 2]
#sig = [1 1; 2 3]
#sig = [2 3]
#sig = [2 2]
sig = [3 2; 2 3]
S = random_S0Graph(sig)
T = complement(S)
w = random_bounded(S.n)
@time opt0 = dsw(S, w, eps=solver_eps)[1]
@time opt1, x, y = dsw_via_complement(T, w, eps=solver_eps)
@test opt1 ≈ opt0 atol=tol
@time opt2 = dsw(T, y, eps=solver_eps)[1]
@test opt2 ≈ 1 atol=tol
@time opt3 = dsw(vertex_graph(S), √y * w * √y, eps=solver_eps)[1]
@test opt3 ≈ opt0 atol=tol
# ϑ(S, w) = max{ ϑ(S0, √y * w * √y) / ϑ(T, y) : y ∈ S1 }
@test opt0 ≈ opt3 / opt2 atol=tol
@test f(S, w, y) ≈ opt3 atol=tol
@test g(S, √y * w * √y) ≈ opt3 atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 492 | # solver returns ALMOST_OPTIMAL when seed=0
Random.seed!(1)
#sig = [1 2; 2 2]
#sig = [1 1; 2 3]
#sig = [2 3]
#sig = [2 2]
sig = [3 2; 2 3]
S = random_S0Graph(sig)
T = complement(S)
w = random_element(S.S1)
w = w*w'
@test w in S.S1
@time opt0 = dsw(S, w, eps=solver_eps)[1]
@time opt1, x, y = dsw_via_complement(T, w, eps=solver_eps)
@test opt1 ≈ opt0 atol=tol
@time opt2 = dsw(T, y, eps=solver_eps)[1]
@test opt2 ≈ 1 atol=tol
@test real(tr(w * √S.D * y * √S.D)) ≈ opt0 * opt2 atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 161 | using Graphs: cycle_graph
n = 7
G = cycle_graph(n)
S = S0Graph(G)
@time λ = dsw(S, eye(n), eps=solver_eps)[1]
@test λ ≈ n*cos(pi/n) / (1 + cos(pi/n)) atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 1348 | Random.seed!(0)
#sig = [1 2; 2 2]
#sig = [1 1; 2 3]
#sig = [2 3]
#sig = [2 2]
sig = [3 2; 2 3]
S = random_S0Graph(sig)
T = complement(S)
n = S.n
D = S.D
J = block_expander(S.sig)
w = random_bounded(S.n)
@time opt0, x1, Z1 = dsw(S, w, eps=solver_eps)
if true
@time opt1, _, x2, Z2 = dsw_via_complement(T, w, eps=solver_eps)
@test opt1 ≈ opt0 atol=tol
else
@time opt1, _, y = dsw_via_complement(T, w, eps=solver_eps)
@test opt1 ≈ opt0 atol=tol
@time opt2, x2, Z2 = dsw(T, y, eps=solver_eps)
@test opt2 ≈ 1 atol=tol
end
x1 /= opt0
Z1 /= opt0
Z1 = J * Z1 * J'
Z2 = J * Z2 * J'
@test Z1 ≈ Z1'
@test Z2 ≈ Z2'
Z1 = Hermitian(Z1)
Z2 = Hermitian(Z2)
@test tr(√D * x1 * √D * x2) ≈ 1 atol=tol
@test partialtrace(Z1, 1, [n,n]) ≈ transpose(x1) atol=tol
@test partialtrace(Z2, 1, [n,n]) ≈ transpose(x2) atol=tol
v1 = reshape(conj(x1), n^2)
v2 = reshape(conj(x2), n^2)
Q1 = [1 v1'; v1 Z1]
Q2 = [1 v2'; v2 Z2]
@test Q1 ≈ Q1'
@test Q2 ≈ Q2'
Q1 = Hermitian(Q1)
Q2 = Hermitian(Q2)
D2 = cat([ 1, -kron(√D, √D) ]..., dims=(1,2))
@test minimum(eigvals(Q1)) > -tol
@test minimum(eigvals(Q2)) > -tol
@test tr(Q1 * D2 * Q2 * D2) ≈ 0 atol=tol
@test Z1 * kron(√D, √D) * v2 ≈ v1 atol=tol
@test Z2 * kron(√D, √D) * v1 ≈ v2 atol=tol
@test Z1 * kron(√D, √D) * Z2 ≈ v1 * v2' atol=tol
@test Z1 * kron(eye(n), D) * Z2 ≈ v1 * v2' atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 290 | Random.seed!(0)
#sig = [1 2; 2 2]
#sig = [1 1; 2 3]
sig = [2 3]
#sig = [2 2]
#sig = [3 2; 2 3]
S = random_S0Graph(sig)
w = random_bounded(S.n)
@time opt1, x1 = dsw(S, w, eps=solver_eps)
@time opt2, x2 = dsw(S, w, use_diag_optimization=false, eps=solver_eps)
@test opt1 ≈ opt2 atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 353 | Random.seed!(0)
n = 4
diags = Subspace([ Array{ComplexF64, 2}(basis_vec((n,n), (i,i))) for i in 1:n ])
S = S0Graph([1 n], diags)
w = random_bounded(n)
λ = dsw(S, w).λ
X = HermitianSemidefinite(n)
problem = maximize(real(tr(X * w)), [ X[i,i] == 1 for i in 1:n ])
solve!(problem, () -> make_optimizer(0, solver_eps))
@test λ ≈ problem.optval atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 612 | sig = [2 3; 3 2; 1 1]
S = random_S0Graph(sig)
w = random_S0_density(S.sig)
w /= tr(w)
#@show S
#display(w)
λ, x1, Z = dsw_schur2(S)
problem = maximize(trace_logm(x1, w), [ λ <= 1 ])
@time solve!(problem, () -> make_optimizer(0, solver_eps))
h1=problem.optval
x1=Hermitian(evaluate(x1))
λ, x2, Z = dsw_schur2(complement(S))
problem = maximize(trace_logm(x2, w), [ λ <= 1 ])
@time solve!(problem, () -> make_optimizer(0, solver_eps))
h2=problem.optval
x2=Hermitian(evaluate(x2))
@test x1*x2 ≈ x2*x1 atol=tol
@test x1*x2 ≈ Ψ(S, w) atol=tol
# follows from the above
@test h1+h2 ≈ tr(w*log(Ψ(S, w))) atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 880 | module NoncommutativeGraphsTesting
include("test_header.jl")
@testset "Classical graph" begin
include("classical_graph.jl")
end
@testset "Simple duality" begin
include("simple_duality.jl")
end
@testset "Block duality" begin
include("block_duality.jl")
end
@testset "Block duality 2" begin
include("block_duality2.jl")
end
# slow and doesn't meet accuracy tolerance
if false
@testset "Thin diag" begin
include("thin_diag.jl")
end
end
@testset "Diag optimization" begin
include("diag_optimization.jl")
end
@testset "Unitary transform" begin
include("unitary_transform.jl")
end
@testset "Compatible matrices" begin
include("compatible_matrices.jl")
end
@testset "Empty classical graph" begin
include("empty_classical.jl")
end
@testset "Entropy splitting" begin
include("entropy_splitting.jl")
end
end # NoncommutativeGraphsTesting
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 304 | Random.seed!(0)
#sig = [1 2; 2 2]
#sig = [1 1; 2 3]
#sig = [2 3]
#sig = [2 2]
sig = [3 2; 2 3]
S = random_S0Graph(sig)
T = complement(S)
w = random_bounded(S.n)
@time opt1 = dsw(S, w, eps=solver_eps)[1]
@time opt2 = dsw_via_complement(complement(S), w, eps=solver_eps)[1]
@test opt1 ≈ opt2 atol=tol
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |
|
[
"MIT"
] | 0.1.1 | 72e4eeb7ecb32cb134bfde2b4f7686eb816cc0c7 | code | 431 | using NoncommutativeGraphs, Subspaces
using Convex, SCS, LinearAlgebra
using Random, RandomMatrices
using Test
function random_bounded(n)
U = rand(Haar(2), n)
return Hermitian(U' * Diagonal(rand(n)) * U)
end
function basis_vec(dims::Tuple, idx::Tuple)
m = zeros(dims)
m[idx...] = 1
return m
end
eye(n) = Matrix(1.0*I, (n,n))
solver_eps = 1e-8
tol = 1e-6
make_optimizer = NoncommutativeGraphs.make_optimizer
| NoncommutativeGraphs | https://github.com/dstahlke/NoncommutativeGraphs.jl.git |