licenses (sequence, 1–3) | version (677 classes) | tree_hash (40 chars) | path (1 class) | type (2 classes) | size (2–8 chars) | text (25–67.1M chars) | package_name (2–41 chars) | repo (33–86 chars)
---|---|---|---|---|---|---|---|---
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | docs | 700 | # PkgBenchmark
PkgBenchmark provides an interface for Julia package developers to track performance changes of their packages.
The package provides the following features:
* Running the benchmark suite at a specified commit, branch or tag. The path to the julia executable, the command line flags, and the environment variables can be customized.
* Comparing performance of a package between different package commits, branches or tags.
* Exporting results to markdown for benchmarks and comparisons, similar to how Nanosoldier reports results for the benchmarks in Base Julia.
## Installation
PkgBenchmark is registered, so it can be installed by running `import Pkg; Pkg.add("PkgBenchmark")`.
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | docs | 2317 | # Running a benchmark suite
```@meta
DocTestSetup = quote
using PkgBenchmark
end
```
Use `benchmarkpkg` to run the benchmark suite defined in the previous section.
```@docs
benchmarkpkg
```
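For example, a minimal run on a hypothetical package `MyPkg` (the name is a placeholder) could look like:
```julia
using PkgBenchmark

# Runs the suite defined in benchmark/benchmarks.jl of MyPkg
# and returns a BenchmarkResults object.
results = benchmarkpkg("MyPkg")
```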
The results of a benchmark run are returned as a `BenchmarkResults`:
```@docs
PkgBenchmark.BenchmarkResults
```
## More advanced customization
Instead of passing a commit, branch, etc. as a `String` to `benchmarkpkg`, a [`BenchmarkConfig`](@ref) can be passed:
```@docs
PkgBenchmark.BenchmarkConfig
```
This object contains the package commit, the julia command, and the environment variables that will
be used when benchmarking. The default values can be seen by calling the default constructor:
```julia-repl
julia> BenchmarkConfig()
BenchmarkConfig:
id: nothing
juliacmd: `/home/user/julia/julia`
env:
```
The `id` is a commit, branch, etc. as described in the previous section. An `id` with value `nothing` means that the current state of the package will be benchmarked.
The default value of `juliacmd` is `joinpath(Sys.BINDIR, Base.julia_exename())`, which is the command to run the julia executable without any command line arguments.
To instead benchmark the branch `PR`, using the julia command `julia -O3`
with the environment variable `JULIA_NUM_THREADS` set to `4`, the config would be created as
```jldoctest
julia> config = BenchmarkConfig(id = "PR",
juliacmd = `julia -O3`,
env = Dict("JULIA_NUM_THREADS" => 4))
BenchmarkConfig:
id: "PR"
juliacmd: `julia -O3`
env: JULIA_NUM_THREADS => 4
```
To benchmark the package with this config, pass it to [`benchmarkpkg`](@ref), e.g.
```julia
benchmarkpkg("Tensors", config)
```
!!! info
    The `id` keyword to the `BenchmarkConfig` does not have to be a branch; it can be almost anything that git can understand, for example a commit id
    or a tag.
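The comparison between different commits mentioned earlier can be done with `judge`; a minimal sketch (the package name `MyPkg` is a placeholder):
```julia
using PkgBenchmark

# Benchmark the "PR" branch against "master" and summarize the differences.
# `judge` runs `benchmarkpkg` for both configurations and returns a
# BenchmarkJudgement comparing target vs. baseline.
judgement = judge("MyPkg",
                  BenchmarkConfig(id = "PR", juliacmd = `julia -O3`),
                  BenchmarkConfig(id = "master"))
```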
Benchmark results can be saved and read using `writeresults` and `readresults`, respectively:
```@docs
PkgBenchmark.readresults
PkgBenchmark.writeresults
```
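As a minimal sketch of saving and loading results (file and package names are placeholders):
```julia
using PkgBenchmark

results = benchmarkpkg("MyPkg")               # run the suite (placeholder name)
writeresults("results.json", results)         # serialize the BenchmarkResults
results_loaded = readresults("results.json")  # read them back
export_markdown("results.md", results)        # write a markdown report
```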
## CI Integration
Tracking changes in performance throughout the development of a package can be automated as part of package CI.
[BenchmarkCI.jl](https://github.com/tkf/BenchmarkCI.jl) provides a convenient way to run a PkgBenchmark suite as part of a package's CI actions.
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.1.10 | 406bb881a8d4ac57347dc2b50294b059db2750e6 | code | 576 | using Documenter
using RetentionParameterEstimator
makedocs(
sitename = "RetentionParameterEstimator",
#format = Documenter.HTML(),
#modules = [RetentionData]
pages = Any[
"Home" => "index.md",
"Docstrings" => "docstrings.md"
]
)
# Documenter can also automatically deploy documentation to gh-pages.
# See "Hosting Documentation" and deploydocs() in the Documenter manual
# for more information.
deploydocs(
repo = "github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl",
devbranch = "main"
)
| RetentionParameterEstimator | https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl.git |
|
[
"MIT"
] | 0.1.10 | 406bb881a8d4ac57347dc2b50294b059db2750e6 | code | 15819 | ### A Pluto.jl notebook ###
# v0.19.41
using Markdown
using InteractiveUtils
# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error).
macro bind(def, element)
quote
local iv = try Base.loaded_modules[Base.PkgId(Base.UUID("6e696c72-6542-2067-7265-42206c756150"), "AbstractPlutoDingetjes")].Bonds.initial_value catch; b -> missing; end
local el = $(esc(element))
global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : iv(el)
el
end
end
# ╔═╡ 09422105-a747-40ac-9666-591326850d8f
begin
# online version
import Pkg
version = "0.1.8"
Pkg.activate(mktempdir())
Pkg.add([
Pkg.PackageSpec(name="RetentionParameterEstimator", version=version)
])
using RetentionParameterEstimator
md"""
online, Packages, estimate\_retention\_parameters, for RetentionParameterEstimator v$(version)
"""
#= # local version (database is still downloaded from github)
import Pkg
# activate the shared project environment
Pkg.activate(Base.current_project())
using RetentionParameterEstimator
#Pkg.add([
# Pkg.PackageSpec(name="RetentionParameterEstimator", rev="minor_fix_jsb")
#])
md"""
local, Packages, estimate\_retention\_parameters.jl, for RetentionParameterEstimator v0.1.6
"""
=#
end
# ╔═╡ eb5fc23c-2151-47fa-b56c-5771a4f8b9c5
begin
html"""<style>
main {
max-width: 75%;
margin-left: 1%;
margin-right: 20% !important;
}
</style>
"""
end
# ╔═╡ f46b165e-67d9-402f-a225-72d1082007be
TableOfContents()
# ╔═╡ 6d4ec54b-01b2-4208-9b9e-fcb70d236c3e
md"""
#
$(Resource("https://raw.githubusercontent.com/JanLeppert/RetentionParameterEstimator.jl/main/docs/src/assets/logo_b.svg"))
Estimation of K-centric retention parameters by temperature programmed GC with different temperature programs.
"""
# ╔═╡ ebc2a807-4413-4721-930a-6328ae72a1a9
md"""
## Select measurement data
Measured chromatograms used to **estimate** the parameters by optimization:
Load own data: $(@bind own_data CheckBox(default=false))
"""
# ╔═╡ 51a22a15-24f9-4280-9c91-32e48727003a
if own_data == true
md"""
$(@bind file_meas FilePicker([MIME("text/csv")]))
"""
else
file_meas = RetentionParameterEstimator.download_data("https://raw.githubusercontent.com/JanLeppert/RetentionParameterEstimator.jl/main/data/meas_df05_Rxi5SilMS.csv");
end
# ╔═╡ eb14e619-82d4-49ac-ab2a-28e56230dbc6
begin
if isnothing(file_meas)
md"""
Selected chromatograms for **estimation**: _nothing_
Please select a file of chromatograms for **estimation**!
"""
else
meas = RetentionParameterEstimator.load_chromatograms(file_meas);
md"""
Selected chromatograms for **estimation**: $(file_meas["name"])
"""
end
end
# ╔═╡ d745c22b-1c96-4a96-83da-abb1df91ab87
begin
if !isnothing(file_meas)
md"""
Select measurements:
$(@bind selected_measurements confirm(MultiSelect(meas[3].measurement; default=meas[3].measurement)))
Select solutes:
$(@bind selected_solutes confirm(MultiSelect(meas[4]; default=meas[4])))
"""
end
end
# ╔═╡ b2c254a2-a5d6-4f18-803a-75d048fc7cdf
meas_select = RetentionParameterEstimator.filter_selected_measurements(meas, selected_measurements, selected_solutes);
# ╔═╡ f3ffd4ce-a378-4033-88e9-bc1fb8cc4bbe
md"""
## Select mode
* `m1` ... estimate the three retention parameters (``T_{char}``, ``θ_{char}`` and ``ΔC_p``).
* `m1a` ... estimate the three retention parameters (``T_{char}``, ``θ_{char}`` and ``ΔC_p``) and select ``L`` and ``d``.
* `m2` ... estimate the three retention parameters (``T_{char}``, ``θ_{char}`` and ``ΔC_p``) and the column diameter ``d``.
$(@bind select_mode confirm(Select(["m1", "m1a", "m2"])))
"""
# ╔═╡ e98f4b1b-e577-40d0-a7d8-71c53d99ee1b
if select_mode == "m1a"
#md"""
#Column parameters:
#L in m: $(@bind L_input NumberField(0.0:0.01:1000.0; default=meas[1].L))
#d in mm: $(@bind d_input NumberField(0.0:0.0001:1.0; default=meas[1].d*1000.0))
@bind col_input confirm(
PlutoUI.combine() do Child
md"""
## Column dimensions
L in m: $(Child("L", NumberField(0.0:0.01:1000.0; default=meas_select[1].L)))
d in mm: $(Child("d", NumberField(0.0:0.0001:1.0; default=meas_select[1].d*1000.0)))
"""
end
)
end
# ╔═╡ 3b40b0b1-7007-48c7-b47b-dbeaf501b73d
begin
min_th = 0.1
loss_th = 1.0
if select_mode == "m1"
check, msg, df_flag, index_flag, res, Telu_max = RetentionParameterEstimator.check_measurement(meas_select, (L = meas_select[1].L, d = meas_select[1].d*1000); min_th=min_th, loss_th=loss_th, se_col=false)
md"""
## Results
L/d ratio: $(meas_select[1].L/(meas_select[1].d))
$(res)
"""
elseif select_mode == "m1a"
check, msg, df_flag, index_flag, res, Telu_max = RetentionParameterEstimator.check_measurement(meas_select, col_input; min_th=min_th, loss_th=loss_th, se_col=false)
md"""
## Results
L/d ratio: $(col_input[1]/(col_input[2]*1e-3))
$(res)
"""
elseif select_mode == "m2"
res, Telu_max = RetentionParameterEstimator.method_m2(meas_select; se_col=false)
check = true
md"""
## Results
d = $(1000.0 * res.d[1]) mm
L/d ratio: $(meas_select[1].L./(res.d[1]))
$(res)
"""
end
end
# d = $(1000.0 * (res.d[1] ± res.d_std[1])) mm
# ╔═╡ 8cc151a6-a60a-4aba-a813-1a142a073948
begin
if check == false
md"""
## Results
!!! warning
$(msg)
	The found minima for the solutes $(meas[4][index_flag]) are above the threshold $(min_th) s².
$(df_flag)
	Check the plausibility of the measurements: if a solute is listed there, all retention times for all used measurements should be checked, not only the flagged ones. An error in one retention time, e.g. a typo, can produce deviations from the model in other (correct) measurements.
$(res)
"""
end
end
# ╔═╡ 0f4c35c4-32f7-4d11-874d-1f23daad7da8
begin
head = ["file:", file_meas["name"],
"selected measurements:", join(selected_measurements, " "),
"mode:", select_mode
]
res_export = RetentionParameterEstimator.separate_error_columns(res)
io = IOBuffer()
CSV.write(io, DataFrame[], header=head)
CSV.write(io, res_export, append=true, writeheader=true)
#export_str_ = export_str(opt_values, col_values, prog_values, peaklist)
md"""
## Export Results
Filename: $(@bind result_filename TextField((20,1); default="Result.csv"))
"""
end
# ╔═╡ b4f17579-9994-46e1-a3d0-6030650f0dbe
md"""
$(DownloadButton(io.data, result_filename))
"""
# ╔═╡ 0d61fd05-c0c6-4764-9f96-3b867a456ad3
md"""
## Select verification data
Measured chromatograms used to **verify** the estimated parameters:
"""
# ╔═╡ 38d1e196-f375-48ac-bc11-80b10472c1cd
if own_data == true
md"""
$(@bind file_comp FilePicker([MIME("text/csv")]))
"""
else
file_comp = RetentionParameterEstimator.download_data("https://raw.githubusercontent.com/JanLeppert/RetentionParameterEstimator.jl/main/data/comp_df05_Rxi5SilMS.csv");
end
# ╔═╡ 762c877d-3f41-49de-a7ea-0eef1456ac11
begin
if isnothing(file_comp)
md"""
Selected chromatograms for **verification**: _nothing_
Please select a file of chromatograms for **verification**!
"""
else
if file_meas == file_comp
comp = RetentionParameterEstimator.load_chromatograms(file_comp);
md"""
Selected chromatograms for **verification**: $(file_comp["name"])
_**Attention**_: The selected data for estimation and verification is the same!
"""
else
comp = RetentionParameterEstimator.load_chromatograms(file_comp);
md"""
Selected chromatograms for **verification**: $(file_comp["name"])
"""
end
end
end
# ╔═╡ 07a7e45a-d73e-4a83-9323-700d3e2a88cc
begin
if !isnothing(file_comp)
md"""
Select comparison measurements:
$(@bind selected_comparison confirm(MultiSelect(comp[3].measurement; default=comp[3].measurement)))
"""
end
end
# ╔═╡ ae424251-f4f7-48aa-a72b-3be85a193705
md"""
## Verification
Using the estimated parameters (``T_{char}``, ``θ_{char}``, ``ΔC_p``), or (``T_{char}``, ``θ_{char}``, ``ΔC_p``, ``d``) respectively, to simulate other temperature programs, and comparing the simulated retention times with the measured retention times.
"""
# ╔═╡ 116ccf37-4a18-44ac-ae6e-98932901a8b0
begin
comp_select = RetentionParameterEstimator.filter_selected_measurements(comp, selected_comparison, selected_solutes)
pl, loss, par = RetentionParameterEstimator.comparison(res, comp_select)
meanΔtR = Array{Float64}(undef, length(pl))
meanrelΔtR = Array{Float64}(undef, length(pl))
for i=1:length(pl)
meanΔtR[i] = mean(abs.(pl[i].ΔtR))
meanrelΔtR[i] = mean(abs.(pl[i].relΔtR))
end
df_loss = DataFrame(comp=selected_comparison, loss=loss, sqrt_loss=sqrt.(loss), mean_ΔtR=meanΔtR, mean_relΔtR_percent=meanrelΔtR.*100.0)
end
# ╔═╡ 157f24e6-1664-41c8-9079-b9dd0a2c98a9
md"""
Peaklists of the simulated verification programs, including the differences to the measured retention times.
$(embed_display(pl))
"""
# ╔═╡ f6a55039-8c32-4d21-8119-fc48e3bf0dc2
md"""
**Options for plot**
add labels: $(@bind labels CheckBox(default=true))
"""
# ╔═╡ e16e8b29-0f85-43b6-a405-6ab60b0e3ee0
begin
if labels==true
label_db = DataFrame(Name=comp[4])
md"""
$(embed_display(RetentionParameterEstimator.plot_chromatogram_comparison(pl, meas_select, comp_select; annotation=labels)))
* Simulated chromatograms in **blue**.
* Measured retention times in **orange**.
* Labels: $(embed_display(label_db))
"""
else
md"""
$(embed_display(RetentionParameterEstimator.plot_chromatogram_comparison(pl, meas_select, comp_select; annotation=labels)))
* Simulated chromatograms in **blue**.
* Measured retention times in **orange**.
"""
end
end
# ╔═╡ f83b26e7-f16d-49ac-ab3b-230533ac9c82
md"""
## Alternative parameters
Select file with alternative parameters:
"""
# ╔═╡ 62c014d5-5e57-49d7-98f8-dace2f1aaa32
if own_data == true
md"""
$(@bind file_db FilePicker([MIME("text/csv")]))
"""
else
file_db = RetentionParameterEstimator.download_data("https://raw.githubusercontent.com/JanLeppert/RetentionParameterEstimator.jl/main/data/database_Rxi5SilMS_beta125.csv");
end
# ╔═╡ 5123aa1b-b899-4dd6-8909-6be3b82a80d0
begin
if isnothing(file_db)
md"""
Selected file with alternative parameters: _nothing_
Please select a file with alternative parameters!
"""
else
db = DataFrame(CSV.File(file_db["data"]))
diff = RetentionParameterEstimator.difference_estimation_to_alternative_data(res, db)
pbox = RetentionParameterEstimator.plotbox(diff)
md"""
Selected file with alternative parameters: $(file_db["name"])
$(embed_display(db))
Difference of parameters to estimation:
$(embed_display(diff))
Boxplot of relative differences:
$(embed_display(pbox))
"""
end
end
# ╔═╡ 903d12f1-6d6f-4a71-8095-7457cffcafc4
md"""
## Isothermal ``\ln{k}`` data
Select file with isothermal measured ``\ln{k}`` data:
"""
# ╔═╡ a41148bc-03b0-4bd1-b76a-7406aab63f48
if own_data == true
md"""
$(@bind file_isolnk FilePicker([MIME("text/csv")]))
"""
else
file_isolnk = RetentionParameterEstimator.download_data("https://raw.githubusercontent.com/JanLeppert/RetentionParameterEstimator.jl/main/data/isothermal_lnk_df05_Rxi5SilMS.csv");
end
# ╔═╡ 66287dfc-fe77-4fa8-8892-79d2d3de6cb3
begin
if isnothing(file_isolnk)
md"""
Selected isothermal measured ``\ln{k}`` data: _nothing_
Please select a file of isothermal measured ``\ln{k}`` data!
"""
else
isolnk = DataFrame(CSV.File(file_isolnk["data"]))
md"""
Selected isothermal measured ``\ln{k}`` data: $(file_isolnk["name"])
$(embed_display(isolnk))
"""
end
end
# ╔═╡ b4fc2f6e-4500-45da-852e-59fb084e365a
md"""
## Plot ``\ln{k}`` over ``T``
Select solute for ``\ln{k}`` plot:
$(@bind select_solute confirm(Select(meas_select[4]; default=meas_select[4][1])))
"""
# ╔═╡ c0f0b955-6791-401f-8252-745332c4210f
begin
gr()
#p_lnk_ = plot(xlabel=L"T \mathrm{\; in \: °C}", ylabel=L"\ln{k}", title=select_solute, minorticks=4, minorgrid=true)
p_lnk_ = plot(xlabel="T in °C", ylabel="lnk", title=select_solute, minorticks=4, minorgrid=true)
if @isdefined isolnk
scatter!(p_lnk_, isolnk.T.-273.15, isolnk[!, select_solute], label="isothermal meas.")
end
if @isdefined db
i = findfirst(RetentionParameterEstimator.GasChromatographySimulator.CAS_identification(select_solute).CAS.==db.CAS)
if !isnothing(i)
lnk_db(T) = (db.DeltaCp[i]/8.31446261815324 + (db.Tchar[i]+273.15)/db.thetachar[i])*((db.Tchar[i]+273.15)/(T+273.15)-1) + db.DeltaCp[i]/8.31446261815324*log((T+273.15)/(db.Tchar[i]+273.15))
RetentionParameterEstimator.plot_lnk!(p_lnk_, lnk_db, RetentionParameterEstimator.Tmin(meas)*0.925, (Telu_max[findfirst(select_solute.==meas[4])]-273.15)*1.025, lbl="alternative database")
end
end
res_f = filter([:Name] => x -> x == select_solute, res)
lnk(T) = (Measurements.value.(res_f.ΔCp[1])/8.31446261815324 + Measurements.value.(res_f.Tchar[1])/Measurements.value.(res_f.θchar[1]))*(Measurements.value.(res_f.Tchar[1])/(T+273.15)-1) + Measurements.value.(res_f.ΔCp[1])/8.31446261815324*log((T+273.15)/Measurements.value.(res_f.Tchar[1]))
RetentionParameterEstimator.plot_lnk!(p_lnk_, lnk, RetentionParameterEstimator.Tmin(meas)*0.925, (Telu_max[findfirst(select_solute.==meas[4])]-273.15)*1.025, lbl=select_mode)
RetentionParameterEstimator.add_min_max_marker!(p_lnk_, RetentionParameterEstimator.Tmin(meas), Telu_max[findfirst(select_solute.==meas[4])]-273.15, lnk)
p_lnk_
end
# ╔═╡ b6c2ad4d-6fc5-4700-80e3-f616c0b9aa91
begin
gr()
#p_lnk_all = plot(xlabel=L"T \mathrm{\; in \: °C}", ylabel=L"\ln{k}", title="all", minorticks=4, minorgrid=true, legend=:none)
p_lnk_all = plot(xlabel="T in °C", ylabel="lnk", title="all", minorticks=4, minorgrid=true, legend=:none)
for i=1:length(meas_select[4])
res_ = filter([:Name] => x -> x == meas_select[4][i], res)
lnk(T) = (Measurements.value.(res_.ΔCp[1])/8.31446261815324 + Measurements.value.(res_.Tchar[1])/Measurements.value.(res_.θchar[1]))*(Measurements.value.(res_.Tchar[1])/(T+273.15)-1) + Measurements.value.(res_.ΔCp[1])/8.31446261815324*log((T+273.15)/Measurements.value.(res_.Tchar[1]))
RetentionParameterEstimator.plot_lnk!(p_lnk_all, lnk, RetentionParameterEstimator.Tmin(meas_select)*0.925, (Telu_max[i]-273.15)*1.025, lbl=meas_select[4][i])
RetentionParameterEstimator.add_min_max_marker!(p_lnk_all, RetentionParameterEstimator.Tmin(meas_select), Telu_max[i]-273.15, lnk)
end
p_lnk_all
end
# ╔═╡ 9178967d-26dc-43be-b6e4-f35bbd0b0b04
md"""
# End
"""
# ╔═╡ Cell order:
# ╠═09422105-a747-40ac-9666-591326850d8f
# ╟─eb5fc23c-2151-47fa-b56c-5771a4f8b9c5
# ╟─f46b165e-67d9-402f-a225-72d1082007be
# ╟─6d4ec54b-01b2-4208-9b9e-fcb70d236c3e
# ╟─ebc2a807-4413-4721-930a-6328ae72a1a9
# ╟─51a22a15-24f9-4280-9c91-32e48727003a
# ╟─eb14e619-82d4-49ac-ab2a-28e56230dbc6
# ╟─d745c22b-1c96-4a96-83da-abb1df91ab87
# ╟─b2c254a2-a5d6-4f18-803a-75d048fc7cdf
# ╟─f3ffd4ce-a378-4033-88e9-bc1fb8cc4bbe
# ╟─e98f4b1b-e577-40d0-a7d8-71c53d99ee1b
# ╟─3b40b0b1-7007-48c7-b47b-dbeaf501b73d
# ╟─8cc151a6-a60a-4aba-a813-1a142a073948
# ╟─0f4c35c4-32f7-4d11-874d-1f23daad7da8
# ╟─b4f17579-9994-46e1-a3d0-6030650f0dbe
# ╟─0d61fd05-c0c6-4764-9f96-3b867a456ad3
# ╟─38d1e196-f375-48ac-bc11-80b10472c1cd
# ╟─762c877d-3f41-49de-a7ea-0eef1456ac11
# ╟─07a7e45a-d73e-4a83-9323-700d3e2a88cc
# ╟─ae424251-f4f7-48aa-a72b-3be85a193705
# ╟─116ccf37-4a18-44ac-ae6e-98932901a8b0
# ╟─157f24e6-1664-41c8-9079-b9dd0a2c98a9
# ╟─f6a55039-8c32-4d21-8119-fc48e3bf0dc2
# ╟─e16e8b29-0f85-43b6-a405-6ab60b0e3ee0
# ╟─f83b26e7-f16d-49ac-ab3b-230533ac9c82
# ╟─62c014d5-5e57-49d7-98f8-dace2f1aaa32
# ╟─5123aa1b-b899-4dd6-8909-6be3b82a80d0
# ╟─903d12f1-6d6f-4a71-8095-7457cffcafc4
# ╟─a41148bc-03b0-4bd1-b76a-7406aab63f48
# ╟─66287dfc-fe77-4fa8-8892-79d2d3de6cb3
# ╟─b4fc2f6e-4500-45da-852e-59fb084e365a
# ╟─c0f0b955-6791-401f-8252-745332c4210f
# ╟─b6c2ad4d-6fc5-4700-80e3-f616c0b9aa91
# ╟─9178967d-26dc-43be-b6e4-f35bbd0b0b04
| RetentionParameterEstimator | https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl.git |
|
[
"MIT"
] | 0.1.10 | 406bb881a8d4ac57347dc2b50294b059db2750e6 | code | 7285 | # functions used to estimate start values of the parameters
"""
	reference_holdup_time(col, prog; control="Pressure")
Calculate the reference holdup time for the GC program `prog` and the column `col` (length, diameter and mobile-phase gas). The reference holdup time is the holdup time at the reference temperature of 150°C.
"""
function reference_holdup_time(col, prog; control="Pressure")
Tref = 150.0
# estimate the time of the temperature program for T=Tref
t_ = prog.time_steps
T_ = prog.temp_steps
knots = Interpolations.deduplicate_knots!(T_ .- Tref; move_knots=true)
interp = interpolate((knots, ), cumsum(t_), Gridded(Linear()))
tref = interp(0.0)
# inlet and outlet pressure at time tref
Fpin_ref = prog.Fpin_itp(tref)
pout_ref = prog.pout_itp(tref)
# hold-up time calculated for the time of the program, when T=Tref
tMref = GasChromatographySimulator.holdup_time(Tref+273.15, Fpin_ref, pout_ref, col.L, col.d, col.gas, control=control)
return tMref
end
"""
elution_temperature(tRs, prog)
Calculate the elution temperatures from retention times `tRs` and GC programs `prog`.
"""
function elution_temperature(tRs, prog)
Telus = Array{Float64}(undef, length(tRs))
for i=1:length(tRs)
if ismissing(tRs[i])
Telus[i] = NaN
else
Telus[i] = prog.T_itp(0.0, tRs[i])
end
end
return Telus
end
"""
estimate_start_parameter_single_ramp(tRs::DataFrame, col, prog; time_unit="min", control="Pressure")
Estimation of initial parameters for `Tchar`, `θchar` and `ΔCp` based on the elution temperatures calculated from the retention times `tR` and GC programs `prog` for column `col`.
This function assumes that single-ramp heating programs are used. The elution temperatures of all measurements are calculated and then interpolated over the dimensionless heating rates. For a dimensionless heating rate of 0.6, the elution temperature and the characteristic temperature of a substance are nearly equal.
Based on this estimated `Tchar`, estimates for the initial values of `θchar` and `ΔCp` are calculated as
``
\\theta_{char,init} = 22 \\left(\\frac{T_{char,init}}{T_{st}}\\right)^{0.7} \\left(1000\\frac{d_f}{d}\\right)^{0.09} °C
``
and
``
\\Delta C_p = (-52 + 0.34 T_{char,init}) \\mathrm{J mol^{-1} K^{-1}}
``
# Output
* `Tchar_est` ... estimate for initial guess of the characteristic temperature
* `θchar_est` ... estimate for initial guess of θchar
* `ΔCp_est` ... estimate for initial guess of ΔCp
* `Telu_max` ... the maximum of the calculated elution temperatures of the solutes
"""
function estimate_start_parameter_single_ramp(tRs::DataFrame, col, prog; time_unit="min", control="Pressure")
if time_unit == "min"
a = 60.0
else
a = 1.0
end
tR_meas = Array(tRs[:,2:end]).*a
nt, ns = size(tR_meas)
tMref = Array{Float64}(undef, nt)
RT = Array{Float64}(undef, nt)
Telu_meas = Array{Float64}(undef, nt, ns)
for i=1:nt
tMref[i] = reference_holdup_time(col, prog[i]; control=control)/a
# single-ramp temperature programs with ramp between time_steps 2 and 3 are assumed
RT[i] = (prog[i].temp_steps[3] - prog[i].temp_steps[2])/prog[i].time_steps[3]*a
Telu_meas[i,:] = elution_temperature(tR_meas[i,:], prog[i])
end
rT = RT.*tMref./θref
Telu_max = Array{Float64}(undef, ns)
Tchar_est = Array{Float64}(undef, ns)
θchar_est = Array{Float64}(undef, ns)
ΔCp_est = Array{Float64}(undef, ns)
for i=1:ns
knots = Interpolations.deduplicate_knots!(Telu_meas[:,i]; move_knots=true)
interp = interpolate((rT .- rT_nom, ), knots, Gridded(Linear()))
Telu_max[i] = maximum(Telu_meas[:,i])
Tchar_est[i] = interp(0.0)
θchar_est[i] = 22.0*(Tchar_est[i]/Tst)^0.7*(1000*col.df/col.d)^0.09
ΔCp_est[i] = -52.0 + 0.34*Tchar_est[i]
end
return Tchar_est, θchar_est, ΔCp_est, Telu_max
end
"""
estimate_start_parameter_mean_elu_temp(tRs::DataFrame, col, prog; time_unit="min", control="Pressure")
Estimation of initial parameters for `Tchar`, `θchar` and `ΔCp` based on the elution temperatures calculated from the retention times `tR` and GC programs `prog` for column `col`.
This function is used if the temperature program is not a single-ramp heating program. The elution temperatures of all measurements are calculated and their mean value is used as the initial characteristic temperature of a substance.
Based on this estimated `Tchar`, estimates for the initial values of `θchar` and `ΔCp` are calculated as
``
\\theta_{char,init} = 22 \\left(\\frac{T_{char,init}}{T_{st}}\\right)^{0.7} \\left(1000\\frac{d_f}{d}\\right)^{0.09} °C
``
and
``
\\Delta C_p = (-52 + 0.34 T_{char,init}) \\mathrm{J mol^{-1} K^{-1}}
``
# Output
* `Tchar_est` ... estimate for initial guess of the characteristic temperature
* `θchar_est` ... estimate for initial guess of θchar
* `ΔCp_est` ... estimate for initial guess of ΔCp
* `Telu_max` ... the maximum of the calculated elution temperatures of the solutes
"""
function estimate_start_parameter_mean_elu_temp(tRs::DataFrame, col, prog; time_unit="min")
if time_unit == "min"
a = 60.0
else
a = 1.0
end
tR_meas = Array(tRs[:,2:end]).*a
nt, ns = size(tR_meas)
Telu_max = Array{Float64}(undef, ns)
Tchar_est = Array{Float64}(undef, ns)
θchar_est = Array{Float64}(undef, ns)
ΔCp_est = Array{Float64}(undef, ns)
for j=1:ns
Telu = Array{Float64}(undef, nt)
for i=1:nt
Telu[i] = elution_temperature(tR_meas[i,j], prog[i])[1]
end
Telu_max[j] = maximum(Telu)
Tchar_est[j] = mean(Telu)
θchar_est[j] = 22.0*(Tchar_est[j]/273.15)^0.7*(1000*col.df/col.d)^0.09
ΔCp_est[j] = -52.0 + 0.34*Tchar_est[j]
end
return Tchar_est, θchar_est, ΔCp_est, Telu_max
end
"""
estimate_start_parameter(tRs::DataFrame, col, prog; time_unit="min", control="Pressure")
Estimation of initial parameters for `Tchar`, `θchar` and `ΔCp` based on the elution temperatures calculated from the retention times `tR` and GC programs `prog` for column `col`.
The initial value of `Tchar` is estimated from the elution temperatures of the measurements.
Based on this estimated `Tchar`, estimates for the initial values of `θchar` and `ΔCp` are calculated as
``
\\theta_{char,init} = 22 \\left(\\frac{T_{char,init}}{T_{st}}\\right)^{0.7} \\left(1000\\frac{d_f}{d}\\right)^{0.09} °C
``
and
``
\\Delta C_p = (-52 + 0.34 T_{char,init}) \\mathrm{J mol^{-1} K^{-1}}
``
# Output
* `Tchar_est` ... estimate for initial guess of the characteristic temperature
* `θchar_est` ... estimate for initial guess of θchar
* `ΔCp_est` ... estimate for initial guess of ΔCp
* `Telu_max` ... the maximum of the calculated elution temperatures of the solutes
"""
function estimate_start_parameter(tR_meas::DataFrame, col, prog; time_unit="min", control="Pressure")
Tchar_est, θchar_est, ΔCp_est, Telu_max = try
estimate_start_parameter_single_ramp(tR_meas, col, prog; time_unit=time_unit, control=control)
catch
estimate_start_parameter_mean_elu_temp(tR_meas, col, prog; time_unit=time_unit)
end
return Tchar_est, θchar_est, ΔCp_est, Telu_max
end | RetentionParameterEstimator | https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl.git |
|
[
"MIT"
] | 0.1.10 | 406bb881a8d4ac57347dc2b50294b059db2750e6 | code | 11189 | # functions for loading the measured retention data and the necessary informations (column definition, programs, ...)
# this function is a modified version from GasChromatographySimulator
# if it works, put this function into GasChromatographySimulator
"""
Program(TP, FpinP, L; pout="vacuum", time_unit="min")
Construct the structure `Program` with conventional formulation (see function `conventional_program` in `GasChromatographySimulator.jl`) of programs for the case
without a thermal gradient.
# Arguments
* `TP`: conventional formulation of a temperature program.
* `FpinP`: conventional formulation of a flow (in m³/s) or inlet pressure (in Pa(a)) program.
* `L`: Length of the capillary measured in m (meter).
* `pout`: Outlet pressure, "vacuum" (default), "atmosphere" or the outlet pressure in Pa(a).
* `time_unit`: unit of time in the programs; `"min"` (default) means times are measured in minutes, `"s"` means times are measured in seconds.
The argument `L` is used to construct the temperature interpolation `T_itp(x,t)`.
# Examples
```julia
julia> Program((40.0, 1.0, 5.0, 280.0, 2.0, 20.0, 320.0, 2.0),
(400000.0, 10.0, 5000.0, 500000.0, 20.0),
10.0)
```
"""
function Program(TP, FpinP, L; pout="vacuum", time_unit="min")
ts1, Ts = GasChromatographySimulator.conventional_program(TP; time_unit=time_unit)
ts2, Fps = GasChromatographySimulator.conventional_program(FpinP; time_unit=time_unit)
# remove additional 0.0 which are not at the first position
ts1_ = ts1[[1; findall(0.0.!=ts1)]]
Ts_ = Ts[[1; findall(0.0.!=ts1)]]
ts2_ = ts2[[1; findall(0.0.!=ts2)]]
Fps_ = Fps[[1; findall(0.0.!=ts2)]]
time_steps = GasChromatographySimulator.common_time_steps(ts1_, ts2_)
temp_steps = GasChromatographySimulator.new_value_steps(Ts_, ts1_, time_steps)
Fpin_steps = GasChromatographySimulator.new_value_steps(Fps_, ts2_, time_steps)
if pout == "vacuum"
pout_steps = zeros(length(time_steps))
elseif isa(pout, Number)
pout_steps = pout.*ones(length(time_steps))
else
pout_steps = pn.*ones(length(time_steps))
end
a_gf = [zeros(length(time_steps)) zeros(length(time_steps)) L.*ones(length(time_steps)) zeros(length(time_steps))]
gf(x) = GasChromatographySimulator.gradient(x, a_gf)
T_itp = GasChromatographySimulator.temperature_interpolation(time_steps, temp_steps, gf, L)
Fpin_itp = GasChromatographySimulator.steps_interpolation(time_steps, Fpin_steps)
pout_itp = GasChromatographySimulator.steps_interpolation(time_steps, pout_steps)
prog = GasChromatographySimulator.Program(time_steps, temp_steps, Fpin_steps, pout_steps, gf, a_gf, T_itp, Fpin_itp, pout_itp)
return prog
end
function extract_temperature_and_pressure_programs(TPprog)
iP = findall(occursin.("p", names(TPprog))) # column index of pressure information
pamb = TPprog[!, iP[end]]
#iT = 2:(iP[1]-1) # column index of temperature program information
TPs = TPprog[!,1:(iP[1]-1)]
PPs = DataFrame(measurement = TPs.measurement, p1 = TPprog[!,iP[1]].+pamb, t1 = TPs.t1) # pressure program in Pa(a)
iTrate = findall(occursin.("RT", names(TPs))) # column index of heating rates
heatingrates = Array{Array{Union{Missing, Float64}}}(undef, length(iTrate))
Tdiff = Array{Array{Union{Missing, Float64}}}(undef, length(iTrate))
for i=1:length(iTrate)
heatingrates[i] = TPs[!, iTrate[i]]
Tdiff[i] = TPs[!, iTrate[i]+1] .- TPs[!, iTrate[i]-2]
end
pressurerates = Array{Array{Union{Missing, Float64}}}(undef, length(iTrate))
for i=1:length(iTrate)
pressurerates[i] = (TPprog[!,iP[i+1]] .- TPprog[!,iP[i]]) .* heatingrates[i] ./ Tdiff[i]
if !ismissing(pressurerates[i] != zeros(length(pressurerates[i])))
PPs[!,"RP$(i)"] = pressurerates[i]
PPs[!,"p$(i+1)"] = TPprog[!, iP[i+1]].+pamb
PPs[!,"t$(i+1)"] = TPs[!,"t$(i+1)"]
else
PPs[!,"t$(i)"] = TPs[!,"t$(i+1)"] .+ Tdiff[i] ./ heatingrates[i]
end
end
PPs[!,"pamb"] = pamb
return TPs, PPs
end
function extract_measured_program(TPprog, path, L)
# load the measured programs
prog = Array{GasChromatographySimulator.Program}(undef, length(TPprog.filename))
for i=1:length(TPprog.filename)
meas_prog = DataFrame(CSV.File(joinpath(path, TPprog.filename[i]), silencewarnings=true))
prog[i] = GasChromatographySimulator.Program(meas_prog.timesteps, meas_prog.tempsteps, meas_prog.pinsteps, meas_prog.poutsteps, L)
end
return prog
end
"""
	load_chromatograms(file; filter_missing=true, delim=";")
Load the chromatographic data (column information, GC program information, retention time information; see also "Structure of input data") from a file.
# Arguments
* `file` ... path to the file.
* `filter_missing` ... option to ignore solutes for which some retention times are missing (`default = true`).
* `delim` ... delimiter of the file (`default = ";"`).
# Output
A tuple of the following quantities:
* `col` ... settings of the column as `GasChromatographySimulator.Column` structure.
* `prog` ... Array of the GC programs as `GasChromatographySimulator.Program` structure.
* `tRs` ... DataFrame of the retention times.
* `solute_names` ... Vector of the solute names.
* `pout` ... outlet pressure (detector pressure), "vacuum" or "atmospheric".
* `time_unit` ... unit of time scale used in the retention times and GC programs, "min" or "s".
"""
function load_chromatograms(file; filter_missing=true, delim=";") # new version -> check case for `filter_missing=false` and missing values in further functions!!!!
n = open(f->countlines(f), file)
#col_df = DataFrame(CSV.File(file, header=1, limit=1, stringtype=String, silencewarnings=true, delim=delim, types=[Float64, Float64, Float64, String, String, String, String]))
col_df = DataFrame(CSV.File(file, header=1, limit=1, stringtype=String, silencewarnings=true, delim=delim, types=Dict("L" => Float64, "d" => Float64, "df" => Float64)))
col = GasChromatographySimulator.Column(col_df.L[1], col_df.d[1], col_df.df[1], col_df.sp[1], col_df.gas[1])
pout = col_df.pout[1]
time_unit = col_df.time_unit[1]
n_meas = Int((n - 2 - 2)/2)
TPprog = DataFrame(CSV.File(file, header=3, limit=n_meas, stringtype=String, silencewarnings=true, delim=delim))
#PP = DataFrame(CSV.File(file, header=3+n_meas+1, limit=n_meas, stringtype=String)) # convert pressures from Pa(g) to Pa(a), add p_atm to this data set
tRs_ = DataFrame(CSV.File(file, header=n-n_meas, stringtype=String, silencewarnings=true, delim=delim))
solute_names_ = names(tRs_)[2:end]
filter!(x -> !occursin("Column", x), solute_names_) # filter out non-solute names (e.g. "Column..." entries)
if names(TPprog)[2] == "filename"
path = dirname(file)
prog = extract_measured_program(TPprog, path, col.L)
else
TPs, PPs = extract_temperature_and_pressure_programs(TPprog)
prog = Array{GasChromatographySimulator.Program}(undef, length(TPs.measurement))
for i=1:length(TPs.measurement)
if pout == "atmospheric"
pout_ = PPs[i, end]
else
pout_ = "vacuum"
end
prog[i] = Program(collect(skipmissing(TPs[i, 2:end])), collect(skipmissing(PPs[i, 2:(end-1)])), col.L; pout=pout_, time_unit=time_unit)
end
end
# filter out substances with retention times missing
if filter_missing == true
solute_names = solute_names_[findall((collect(any(ismissing, c) for c in eachcol(tRs_))).==false)[2:end].-1]
tRs = tRs_[!,findall((collect(any(ismissing, c) for c in eachcol(tRs_))).==false)]
else
solute_names = solute_names_ # what to do with missing entries for the optimization????
tRs = tRs_
end
return col, prog, tRs[!,1:(length(solute_names)+1)], solute_names, pout, time_unit#, TPs, PPs
end
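# `load_chromatograms` assumes a fixed file layout: one header plus one
# column-definition line, one header plus n_meas temperature/pressure-program
# rows, and one header plus n_meas retention-time rows, so a file with n lines
# encodes n_meas = (n - 4)/2 measurements. A sketch of that bookkeeping
# (hypothetical helper name):

```julia
# number of measurements encoded in a chromatogram file with n lines
# (2 column-definition lines + 1 program header + n_meas program rows
#  + 1 retention-time header + n_meas retention-time rows)
n_meas_from_lines(n) = Int((n - 2 - 2) / 2)
```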
"""
load_chromatograms(file::Dict{Any, Any}; filter_missing=true, path=joinpath(dirname(pwd()), "data", "exp_pro"))
Loading of the chromatographic data (column information, GC program information, retention time information, see also "Structure of input data") from a file selected by the FilePicker in a Pluto notebook.
# Arguments
* `file` ... file dictionary from the FilePicker.
* `filter_missing` ... option to ignore solutes for which some retention times are missing (`default = true`).
* `path` ... if the temperature programs are defined by measured temperatures over time, define the path to these files.
# Output
A tuple of the following quantities:
* `col` ... settings of the column as `GasChromatographySimulator.Column` structure.
* `prog` ... Array of the GC programs as `GasChromatographySimulator.Program` structure.
* `tRs` ... DataFrame of the retention times.
* `solute_names` ... Vector of the solute names.
* `pout` ... outlet pressure (detector pressure), "vacuum" or "atmospheric".
* `time_unit` ... unit of time scale used in the retention times and GC programs, "min" or "s".
"""
function load_chromatograms(file::Dict{Any, Any}; filter_missing=true, path=joinpath(dirname(pwd()), "data", "exp_pro"), delim=";") # if file is the output of FilePicker()
n = length(CSV.File(file["data"]; silencewarnings=true, comment=";;"))+1
col_df = DataFrame(CSV.File(file["data"], header=1, limit=1, stringtype=String, silencewarnings=true, delim=delim))
col = GasChromatographySimulator.Column(convert(Float64, col_df.L[1]), col_df.d[1], col_df.df[1], col_df.sp[1], col_df.gas[1])
pout = col_df.pout[1]
time_unit = col_df.time_unit[1]
n_meas = Int((n - 2 - 2)/2)
TPprog = DataFrame(CSV.File(file["data"], header=3, limit=n_meas, stringtype=String, silencewarnings=true, delim=delim))
#PP = DataFrame(CSV.File(file, header=3+n_meas+1, limit=n_meas, stringtype=String)) # convert pressures from Pa(g) to Pa(a), add p_atm to this data set
tRs_ = DataFrame(CSV.File(file["data"], header=n-n_meas, stringtype=String, silencewarnings=true, comment=";;", delim=delim))
solute_names_ = names(tRs_)[2:end]
filter!(x -> !occursin("Column", x), solute_names_) # filter out non-solute names (e.g. "Column..." entries)
if names(TPprog)[2] == "filename"
#path = dirname(file)
prog = extract_measured_program(TPprog, path, col.L)
else
TPs, PPs = extract_temperature_and_pressure_programs(TPprog)
prog = Array{GasChromatographySimulator.Program}(undef, length(TPs.measurement))
for i=1:length(TPs.measurement)
if pout == "vacuum"
pout_ = "vacuum"
else # pout="atmospheric"
pout_ = PPs[i, end]
end
prog[i] = Program(collect(skipmissing(TPs[i, 2:end])), collect(skipmissing(PPs[i, 2:(end-1)])), col.L; pout=pout_, time_unit=time_unit)
end
end
# filter out substances with retention times missing
if filter_missing == true
solute_names = solute_names_[findall((collect(any(ismissing, c) for c in eachcol(tRs_))).==false)[2:end].-1]
tRs = tRs_[!,findall((collect(any(ismissing, c) for c in eachcol(tRs_))).==false)]
else
solute_names = solute_names_
tRs = tRs_
end
return col, prog, tRs[!,1:(length(solute_names)+1)], solute_names, pout, time_unit#, TPs, PPs
end

# RetentionParameterEstimator (https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl.git)
# functions defining the loss-function
"""
tR_calc(Tchar, θchar, ΔCp, L, d, prog, gas; opt=std_opt)
Calculates the retention time tR for a solute with the K-centric parameters `Tchar`, `θchar` and `ΔCp` for a column with length `L`, internal diameter `d`, the (conventional) program `prog`, options `opt` and mobile phase `gas`.
For this calculation only the ODE for the migration of a solute in a GC column is solved, using the function `GasChromatographySimulator.solving_migration`.
"""
function tR_calc(Tchar, θchar, ΔCp, L, d, prog, gas; opt=std_opt)
# df has no influence on the result (is hidden in Tchar, θchar, ΔCp)
# replace the following using GasChromatographySimulator.solving_migration or GasChromatographySimulator.solving_odesystem_r (but reworked for autodifferentiability)
# also add possibility of quantities with uncertainties!!!
# can Optimization.jl work with uncertainties???
k(x,t,Tchar_,θchar_,ΔCp_) = exp((ΔCp_/R + Tchar_/θchar_)*(Tchar_/prog.T_itp(x,t)-1) + ΔCp_/R*real(log(Complex(prog.T_itp(x,t)/Tchar_))))
rM(x,t,L_,d_) = GasChromatographySimulator.mobile_phase_residency(x,t, prog.T_itp, prog.Fpin_itp, prog.pout_itp, L_, d_, gas; ng=opt.ng, vis=opt.vis, control=opt.control)
r(t,p,x) = (1+k(x,t,p[1],p[2],p[3]))*rM(x,t,p[4],p[5])
t₀ = 0.0
xspan = (0.0, L)
p = (Tchar, θchar, ΔCp, L, d)
prob = ODEProblem(r, t₀, xspan, p)
solution = solve(prob, alg=opt.alg, abstol=opt.abstol, reltol=opt.reltol)
tR = solution.u[end]
return tR
end
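# The exponent in the local `k(x,t,...)` closure above is the K-centric
# retention model ln k = (ΔCp/R + Tchar/θchar)*(Tchar/T - 1) + (ΔCp/R)*ln(T/Tchar),
# which gives k = 1 at T = Tchar by construction. A standalone sketch
# (hypothetical helper; the value of the gas constant R is assumed):

```julia
const R_gas = 8.31446  # gas constant in J mol^-1 K^-1 (assumed value)
# K-centric retention factor at temperature T (all temperatures in K)
retention_factor(T, Tchar, θchar, ΔCp) =
    exp((ΔCp/R_gas + Tchar/θchar) * (Tchar/T - 1) + ΔCp/R_gas * log(T/Tchar))
```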
# use this as the main function
function tR_calc(Tchar, θchar, ΔCp, L, d, df, prog, gas; opt=std_opt)
# df has no influence on the result (is hidden in Tchar, θchar, ΔCp)
solution = GasChromatographySimulator.solving_migration(prog.T_itp, prog.Fpin_itp, prog.pout_itp, L, d, df, Tchar, θchar, ΔCp, d/df, gas, opt)
tR = solution.u[end]
return tR
end
"""
tR_τR_calc(Tchar, θchar, ΔCp, L, d, df, prog, Cag, t₀, τ₀, gas; opt=std_opt)
Calculates the retention time tR and the peak width τR for a solute with the K-centric parameters `Tchar`, `θchar` and `ΔCp` for a column with length `L`, internal diameter `d`, film thickness `df`, the (conventional) program `prog`, diffusivity coefficient `Cag`, start time `t₀`, initial peak width `τ₀`, options `opt` and mobile phase `gas`.
For this calculation the ODE-system for the migration of a solute and peak width in a GC column is solved, using the function `GasChromatographySimulator.solving_odesystem_r`.
The result is a tuple of retention time `tR` and peak width `τR`.
"""
function tR_τR_calc(Tchar, θchar, ΔCp, L, d, df, prog, Cag, t₀, τ₀, gas; opt=std_opt)
solution = GasChromatographySimulator.solving_odesystem_r(L, d, df, gas, prog.T_itp, prog.Fpin_itp, prog.pout_itp, Tchar, θchar, ΔCp, df/d, Cag, t₀, τ₀, opt)
tR = solution.u[end][1]
τR = sqrt(solution.u[end][2])
return tR, τR
end
"""
loss(tR, Tchar, θchar, ΔCp, L, d, prog, gas; opt=std_opt, metric="squared")
Loss function as the mean of the squared (or absolute) residuals between the measured and calculated retention times.
# Arguments
* `tR` ... mxn-array of the measured retention times in seconds.
* `Tchar` ... n-array of characteristic temperatures in K.
* `θchar` ... n-array of characteristic constants in °C.
* `ΔCp` ... n-array of the change of adiabatic heat capacity in J mol^-1 K^-1.
* `L` ... number of the length of the column in m.
* `d` ... number of the diameters of the column in m.
* `prog` ... m-array of structure GasChromatographySimulator.Programs containing the definition of the GC-programs.
* `gas` ... string of name of the mobile phase gas.
# Output
The output is the following quantity:
* `l` ... mean of the squared (for `metric="squared"`) or absolute (for `metric="abs"`) residuals over m GC-programs and n solutes.
"""
function loss(tR, Tchar, θchar, ΔCp, L, d, prog, gas; opt=std_opt, metric="squared")
if length(size(tR)) == 1
ns = 1 # number of solutes
np = size(tR)[1] # number of programs
elseif length(size(tR)) == 0
ns = 1
np = 1
else
ns = size(tR)[2]
np = size(tR)[1]
end
tRcalc = Array{Any}(undef, np, ns)
for j=1:ns
for i=1:np
tRcalc[i,j] = tR_calc(Tchar[j], θchar[j], ΔCp[j], L, d, prog[i], gas; opt=opt)
end
end
if metric == "abs"
l = sum(abs.(tR.-tRcalc))/(ns*np)
elseif metric == "squared"
l = sum((tR.-tRcalc).^2)/(ns*np)
else
l = sum((tR.-tRcalc).^2)/(ns*np)
end
return l
end
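# The `metric` switch in `loss` reduces to the mean squared vs. mean absolute
# residual. A self-contained sketch of just that arithmetic on toy retention
# times (hypothetical helper, not package API):

```julia
# mean residual between measured and calculated retention times
function residual_metric(tR, tRcalc; metric="squared")
    n = length(tR)
    if metric == "abs"
        return sum(abs.(tR .- tRcalc)) / n
    else  # default: "squared"
        return sum((tR .- tRcalc).^2) / n
    end
end
```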
function loss(tR, Tchar, θchar, ΔCp, φ₀, L, d, df, prog, gas; opt=std_opt, metric="squared")
if length(size(tR)) == 1
ns = 1 # number of solutes
np = size(tR)[1] # number of programs
elseif length(size(tR)) == 0
ns = 1
np = 1
else
ns = size(tR)[2]
np = size(tR)[1]
end
tRcalc = Array{Any}(undef, np, ns)
for j=1:ns
for i=1:np
tRcalc[i,j] = tR_calc(Tchar[j], θchar[j], ΔCp[j], L, d, df, prog[i], gas; opt=opt) # φ₀ = df/d is implied by the (L, d, df) method
end
end
if metric == "abs"
l = sum(abs.(tR.-tRcalc))/(ns*np)
elseif metric == "squared"
l = sum((tR.-tRcalc).^2)/(ns*np)
else
l = sum((tR.-tRcalc).^2)/(ns*np)
end
return l
end
function tR_calc_(A, B, C, L, d, β, prog, gas; opt=std_opt)
k(x,t,A_,B_,C_,β_) = exp(A_ + B_/prog.T_itp(x,t) + C_*log(prog.T_itp(x,t)) - log(β_))
#rM(x,t,L,d) = GasChromatographySimulator.mobile_phase_residency(x,t, prog.T_itp, prog.Fpin_itp, prog.pout_itp, L, d, gas; ng=opt.ng, vis=opt.vis, control=opt.control)
rM(x,t,L_,d_) = 64 * sqrt(prog.Fpin_itp(t)^2 - x/L*(prog.Fpin_itp(t)^2-prog.pout_itp(t)^2)) / (prog.Fpin_itp(t)^2-prog.pout_itp(t)^2) * L_/d_^2 * GasChromatographySimulator.viscosity(x, t, prog.T_itp, gas, vis=opt.vis) * prog.T_itp(x, t)
r(t,p,x) = (1+k(x,t,p[1],p[2],p[3], p[6]))*rM(x,t,p[4],p[5])
t₀ = 0.0
xspan = (0.0, L)
p = (A, B, C, L, d, β)
prob = ODEProblem(r, t₀, xspan, p)
solution = solve(prob, alg=opt.alg, abstol=opt.abstol, reltol=opt.reltol)
tR = solution.u[end]
return tR
end
function loss_(tR, A, B, C, L, d, β, prog, gas; opt=std_opt, metric="squared")
# loss function as sum over all programs and solutes
if length(size(tR)) == 1
ns = 1 # number of solutes
np = size(tR)[1] # number of programs
elseif length(size(tR)) == 0
ns = 1
np = 1
else
ns = size(tR)[2]
np = size(tR)[1]
end
tRcalc = Array{Any}(undef, np, ns)
for j=1:ns
for i=1:np
tRcalc[i,j] = tR_calc_(A[j], B[j], C[j], L, d, β, prog[i], gas; opt=opt)
end
end
if metric == "abs"
l = sum(abs.(tR.-tRcalc))/(ns*np)
elseif metric == "squared"
l = sum((tR.-tRcalc).^2)/(ns*np)
else
l = sum((tR.-tRcalc).^2)/(ns*np)
end
return l, tRcalc
end

# additional misc. functions
function compare!(df, db)
ΔTchar = Array{Float64}(undef, length(df.Name))
Δθchar = Array{Float64}(undef, length(df.Name))
ΔΔCp = Array{Float64}(undef, length(df.Name))
relΔTchar = Array{Float64}(undef, length(df.Name))
relΔθchar = Array{Float64}(undef, length(df.Name))
relΔΔCp = Array{Float64}(undef, length(df.Name))
for i=1:length(df.Name)
ii = findfirst(df.Name[i].==db.Name)
if isnothing(ii)
ΔTchar[i] = NaN
Δθchar[i] = NaN
ΔΔCp[i] = NaN
relΔTchar[i] = NaN
relΔθchar[i] = NaN
relΔΔCp[i] = NaN
else
ΔTchar[i] = df.Tchar[i] - (db.Tchar[ii] + 273.15)
Δθchar[i] = df.θchar[i] - db.thetachar[ii]
ΔΔCp[i] = df.ΔCp[i] - db.DeltaCp[ii]
relΔTchar[i] = ΔTchar[i]/(db.Tchar[ii] + 273.15)
relΔθchar[i] = Δθchar[i]/db.thetachar[ii]
relΔΔCp[i] = ΔΔCp[i]/db.DeltaCp[ii]
end
end
df[!, :ΔTchar] = ΔTchar
df[!, :Δθchar] = Δθchar
df[!, :ΔΔCp] = ΔΔCp
df[!, :relΔTchar] = relΔTchar
df[!, :relΔθchar] = relΔθchar
df[!, :relΔΔCp] = relΔΔCp
return df
end
function simulate_measurements(compare, meas, result::DataFrame; opt=std_opt)
L = compare[1].L
sp = compare[1].sp
φ₀ = compare[1].df/compare[1].d
gas = compare[1].gas
prog = compare[2]
solute_names = result.Name
if compare[6] == "min"
a = 60.0
else
a = 1.0
end
CAS = GasChromatographySimulator.CAS_identification(string.(result.Name)).CAS
sub = Array{GasChromatographySimulator.Substance}(undef, length(solute_names))
for i=1:length(solute_names)
#ii = findfirst(solute_names[i].==result[j].Name)
if "θchar" in names(result)
Cag = GasChromatographySimulator.diffusivity(CAS[i], gas)
sub[i] = GasChromatographySimulator.Substance(result.Name[i], CAS[i], result.Tchar[i], result.θchar[i], result.ΔCp[i], φ₀, "", Cag, 0.0, 0.0)
else
Cag = GasChromatographySimulator.diffusivity(CAS[i], gas)
sub[i] = GasChromatographySimulator.Substance(result.Name[i], CAS[i], result.Tchar[i]+273.15, result.thetachar[i], result.DeltaCp[i], φ₀, "", Cag, 0.0, 0.0)
end
end
par = Array{GasChromatographySimulator.Parameters}(undef, length(prog))
pl = Array{DataFrame}(undef, length(prog))
loss = Array{Float64}(undef, length(prog))
for i=1:length(compare[2])
#ii = findfirst(solute_names[i].==solute_names_compare)
if "d" in names(result)
d = mean(result.d) # if d was estimate use the mean value
else
d = compare[1].d
end
λ = meas[1].L/d
col = GasChromatographySimulator.Column(λ*d, d, φ₀*d, sp, gas)
par[i] = GasChromatographySimulator.Parameters(col, compare[2][i], sub, opt)
try
pl[i] = GasChromatographySimulator.simulate(par[i])[1]
catch
pl[i] = DataFrame(Name=solute_names, tR=NaN.*ones(length(solute_names)), τR=NaN.*ones(length(solute_names)))
end
#CAS = GasChromatographySimulator.CAS_identification(string.(result.Name)).CAS
ΔtR = Array{Float64}(undef, length(solute_names))
relΔtR = Array{Float64}(undef, length(solute_names))
for k=1:length(solute_names)
kk = findfirst(pl[i].Name[k].==compare[4])
tR_compare = Array(compare[3][i, 2:(length(compare[4])+1)]).*a
ΔtR[k] = pl[i].tR[k] - tR_compare[kk]
relΔtR[k] = (pl[i].tR[k] - tR_compare[kk])/tR_compare[kk]
end
pl[i][!, :ΔtR] = ΔtR
pl[i][!, :relΔtR] = relΔtR
loss[i] = sum(ΔtR.^2)/length(ΔtR)
end
return pl, loss, par
end
"""
separate_error_columns(res)
If the result dataframe `res` contains columns of `Measurement` type, these columns are split in two. The first column contains the values and keeps the original column name. The second column contains the uncertainties and its name is the original name with "_uncertainty" appended. Columns of other types are copied as is into the new dataframe.
"""
function separate_error_columns(res)
new_res = DataFrame()
for i=1:size(res)[2]
if typeof(res[!,i]) == Array{Measurement{Float64}, 1}
new_res[!, names(res)[i]] = Measurements.value.(res[!,i])
new_res[!, names(res)[i]*"_uncertainty"] = Measurements.uncertainty.(res[!,i])
else
new_res[!, names(res)[i]] = res[!,i]
end
end
return new_res
end
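# Stripped of the DataFrames and Measurements.jl machinery,
# `separate_error_columns` splits value ± uncertainty pairs into two parallel
# columns. A sketch with plain (value, uncertainty) tuples standing in for
# `Measurement{Float64}` values (hypothetical helper):

```julia
# split (value, uncertainty) pairs into two parallel vectors
split_value_uncertainty(col) = (first.(col), last.(col))
```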
# functions for the notebook
function download_data(url)
io = IOBuffer();
download = urldownload(url, save_raw=io);
return Dict{Any, Any}("data" => io, "name" => split(url, '/')[end])
end
# filter function for selected measurements and selected solutes
function filter_selected_measurements(meas, selected_measurements, selected_solutes)
index_measurements = Array{Int}(undef, length(selected_measurements))
for i=1:length(selected_measurements)
index_measurements[i] = findfirst(selected_measurements[i].==meas[3].measurement)
end
index_solutes = Array{Int}(undef, length(selected_solutes))
for i=1:length(selected_solutes)
if isnothing(findfirst(selected_solutes[i].==names(meas[3])))
error("The names of selected analytes are different from the measurement file.")
else
index_solutes[i] = findfirst(selected_solutes[i].==names(meas[3]))
end
end
meas_select = (meas[1], meas[2][index_measurements], meas[3][index_measurements, [1; index_solutes]], selected_solutes, meas[5], meas[6])
return meas_select
end
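# The lookup pattern used in `filter_selected_measurements` (map selected names
# to their indices, failing loudly on unknown names) in isolation (hypothetical
# helper):

```julia
# map selected names to their positions in the available names,
# erroring on names that are not present
function name_indices(selected, available)
    idx = Vector{Int}(undef, length(selected))
    for (i, name) in enumerate(selected)
        j = findfirst(==(name), available)
        isnothing(j) && error("The names of selected analytes are different from the measurement file.")
        idx[i] = j
    end
    return idx
end
```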
# comparing predicted retention times using the optimization result `res` with the measured retention times `comp`
function comparison(res, comp)
opt = GasChromatographySimulator.Options(ng=true, odesys=false)
#CAS_comp = GasChromatographySimulator.CAS_identification(comp[4]).CAS
#CAS_meas = GasChromatographySimulator.CAS_identification(meas[4]).CAS
i_sub = findall(x->x in res.Name, comp[4]) # indices of common elements of CAS_comp in CAS_meas (indices relative to CAS_meas)
sub = Array{GasChromatographySimulator.Substance}(undef, length(i_sub))
for i in i_sub
id = GasChromatographySimulator.CAS_identification(res.Name[i])
#if ismissing(id)
# id_C15= GasChromatographySimulator.CAS_identification("pentadecane")
# Cag = GasChromatographySimulator.diffusivity(id_C15, comp[1].gas) # use C15 value
# CAS = id_C15.CAS
#else
Cag = GasChromatographySimulator.diffusivity(id, comp[1].gas)
CAS = id.CAS
#end
if "θchar" in names(res)
sub[i] = GasChromatographySimulator.Substance(res.Name[i], CAS, Measurements.value(res.Tchar[i]), Measurements.value(res.θchar[i]), Measurements.value(res.ΔCp[i]), comp[1].df/comp[1].d, "", Cag, 0.0, 0.0)
else
sub[i] = GasChromatographySimulator.Substance(res.Name[i], CAS, res.Tchar[i]+273.15, res.thetachar[i], res.DeltaCp[i], comp[1].df/comp[1].d, "", Cag, 0.0, 0.0)
end
# this is for phase ratio as set in comp
end
if comp[6] == "min"
a = 60.0
else
a = 1.0
end
par = Array{GasChromatographySimulator.Parameters}(undef, length(comp[3].measurement))
pl = Array{DataFrame}(undef, length(comp[3].measurement))
loss = Array{Float64}(undef, length(comp[3].measurement))
for i=1:length(comp[3].measurement)
#ii = findfirst(solute_names[i].==solute_names_compare)
if "d" in names(res)
d = mean(Measurements.value.(res.d)) # if d was estimate use the mean value
elseif @isdefined col_input # `col_input` may be defined globally in the notebook
d = col_input.d/1000.0
else
d = comp[1].d
end
col = GasChromatographySimulator.Column(comp[1].L, d, comp[1].df/comp[1].d*d, comp[1].sp, comp[1].gas)
par[i] = GasChromatographySimulator.Parameters(col, comp[2][i], sub, opt)
#try
#pl[i] = GasChromatographySimulator.simulate(par[i])[1]
sol, peak = GasChromatographySimulator.solve_separate_multithreads(par[i])
pl[i] = peaklist(sol, peak, par[i])
#catch
# pl[i] = DataFrame(Name=comp[4], tR=NaN.*ones(length(comp[4])), τR=NaN.*ones(length(comp[4])))
#end
#CAS = GasChromatographySimulator.CAS_identification(string.(result[j].Name)).CAS
ΔtR = Array{Float64}(undef, length(comp[4]))
relΔtR = Array{Float64}(undef, length(comp[4]))
for k=1:length(comp[4])
kk = findfirst(pl[i].Name[k].==comp[4])
tR_compare = Array(comp[3][i, 2:(length(comp[4])+1)]).*a
ΔtR[k] = pl[i].tR[k] - tR_compare[kk]
relΔtR[k] = (pl[i].tR[k] - tR_compare[kk])/tR_compare[kk]
end
pl[i][!, :tR] = pl[i].tR./a
pl[i][!, :τR] = pl[i].τR./a
pl[i][!, :ΔtR] = ΔtR./a
pl[i][!, :relΔtR] = relΔtR
loss[i] = sum(ΔtR.^2)/length(ΔtR)
end
return pl, loss, par
end
# from GasChromatographySimulator
function peaklist(sol, peak, par)
n = length(par.sub)
No = Array{Union{Missing, Int64}}(undef, n)
Name = Array{String}(undef, n)
CAS = Array{String}(undef, n)
tR = Array{Float64}(undef, n)
TR = Array{Float64}(undef, n)
σR = Array{Float64}(undef, n)
uR = Array{Float64}(undef, n)
τR = Array{Float64}(undef, n)
kR = Array{Float64}(undef, n)
Res = fill(NaN, n)
Δs = fill(NaN, n)
Annotations = Array{String}(undef, n)
#Threads.@threads for i=1:n
for i=1:n
Name[i] = par.sub[i].name
CAS[i] = par.sub[i].CAS
if sol[i].t[end]==par.col.L
tR[i] = sol[i].u[end]
TR[i] = par.prog.T_itp(par.col.L, tR[i]) - 273.15
uR[i] = 1/GasChromatographySimulator.residency(par.col.L, tR[i], par.prog.T_itp, par.prog.Fpin_itp, par.prog.pout_itp, par.col.L, par.col.d, par.col.df, par.col.gas, par.sub[i].Tchar, par.sub[i].θchar, par.sub[i].ΔCp, par.sub[i].φ₀; ng=par.opt.ng, vis=par.opt.vis, control=par.opt.control, k_th=par.opt.k_th)
τR[i] = sqrt(peak[i].u[end])
σR[i] = τR[i]*uR[i]
kR[i] = GasChromatographySimulator.retention_factor(par.col.L, tR[i], par.prog.T_itp, par.col.d, par.col.df, par.sub[i].Tchar, par.sub[i].θchar, par.sub[i].ΔCp, par.sub[i].φ₀; k_th=par.opt.k_th)
else
tR[i] = NaN
TR[i] = NaN
uR[i] = NaN
τR[i] = NaN
σR[i] = NaN
kR[i] = NaN
end
No[i] = try
parse(Int,split(par.sub[i].ann, ", ")[end])
catch
missing
end
if ismissing(No[i])
Annotations[i] = par.sub[i].ann
else
Annotations[i] = join(split(par.sub[i].ann, ", ")[1:end-1], ", ")
end
end
df = sort!(DataFrame(No = No, Name = Name, CAS = CAS, tR = tR, τR = τR, TR=TR, σR = σR, uR = uR, kR = kR, Annotations = Annotations, ), [:tR])
#Threads.@threads for i=1:n-1
for i=1:n-1
Res[i] = (df.tR[i+1] - df.tR[i])/(2*(df.τR[i+1] + df.τR[i]))
Δs[i] = (df.tR[i+1] - df.tR[i])/(df.τR[i+1] - df.τR[i]) * log(df.τR[i+1]/df.τR[i])
end
df[!, :Res] = Res
df[!, :Δs] = Δs
return select(df, [:No, :Name, :CAS, :tR, :τR, :TR, :σR, :uR, :kR, :Res, :Δs, :Annotations])
end
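# The two separation measures filled in by `peaklist` can be isolated: the
# resolution Res = Δt/(2(τ₁+τ₂)) (peak base widths taken as 4τ) and the
# separation metric Δs = Δt/(τ₂-τ₁)*ln(τ₂/τ₁). A sketch with toy values
# (hypothetical helper names):

```julia
# resolution and separation metric of two neighboring peaks
resolution(tR1, tR2, τ1, τ2) = (tR2 - tR1) / (2 * (τ2 + τ1))
separation_metric(tR1, tR2, τ1, τ2) = (tR2 - tR1) / (τ2 - τ1) * log(τ2 / τ1)
```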
function plot_chromatogram_comparison(pl, meas, comp; lines=true, annotation=true)
gr()
p_chrom = Array{Plots.Plot}(undef, length(pl))
for i=1:length(pl)
p_chrom[i] = plot(xlabel="time in $(meas[6])", legend=false)
max_ = maximum(GasChromatographySimulator.plot_chromatogram(pl[i], (minimum(pl[i].tR)*0.95, maximum(pl[i].tR)*1.05))[3])*1.05
min_ = - max_/20
GasChromatographySimulator.plot_chromatogram!(p_chrom[i], pl[i], (minimum(pl[i].tR)*0.95, maximum(pl[i].tR)*1.05); annotation=false, mirror=false)
xlims!((minimum(pl[i].tR)*0.95, maximum(pl[i].tR)*1.05))
ylims!(min_, max_)
#add marker for measured retention times
for j=1:length(comp[4])
plot!(p_chrom[i], comp[3][i,j+1].*ones(2), [min_, max_], c=:orange)
# add lines between measured and calculated retention times
jj = findfirst(comp[4][j].==pl[i].Name)
apex = GasChromatographySimulator.chromatogram([pl[i].tR[jj]], [pl[i].tR[jj]], [pl[i].τR[jj]])[1]
if lines == true
plot!(p_chrom[i], [comp[3][i,j+1], pl[i].tR[jj]], [max_, apex], c=:grey, linestyle=:dash)
if annotation==true
plot!(p_chrom[i], annotations = (pl[i].tR[jj], apex, text(j, 10, rotation=0, :center)))
end
end
end
plot!(p_chrom[i], title=comp[3].measurement[i])
end
return plot(p_chrom..., layout=(length(p_chrom),1), size=(800,length(p_chrom)*200), yaxis=nothing, grid=false)
end
function plot_lnk!(p, lnk, Tmin, Tmax; lbl="")
Trange = Tmin:(Tmax-Tmin)/100.0:Tmax
lnk_calc = Array{Float64}(undef, length(Trange))
for i=1:length(Trange)
lnk_calc[i] = lnk(Trange[i])
end
plot!(p, Trange, lnk_calc, label=lbl)
return p
end
function Tmin(meas)
Tmin_ = Array{Float64}(undef, length(meas[2]))
for i=1:length(meas[2])
Tmin_[i] = minimum(meas[2][i].temp_steps)
end
return minimum(Tmin_)
end
function add_min_max_marker!(p, Tmin, Tmax, lnk)
scatter!(p, (Tmin, lnk(Tmin)), markershape=:rtriangle, markersize=5, c=:black, label = "measured temperature range")
scatter!(p, (Tmax, lnk(Tmax)), markershape=:ltriangle, markersize=5, c=:black, label = "measured temperature range")
return p
end
function plotbox(df1)
p_box1 = boxplot(ylabel="relative difference in %", legend=false)
#for i=1:length(unique(df1.methode))
# df1f = filter([:methode] => x -> x .== unique(df1.methode)[i], df1)
boxplot!(p_box1, ["Tchar"], df1.relΔTchar.*100.0)
dotplot!(p_box1, ["Tchar"], df1.relΔTchar.*100.0, marker=(:black, stroke(0)))
boxplot!(p_box1, ["θchar"], df1.relΔθchar.*100.0)
dotplot!(p_box1, ["θchar"], df1.relΔθchar.*100.0, marker=(:black, stroke(0)))
boxplot!(p_box1, ["ΔCp"], df1.relΔΔCp.*100.0)
dotplot!(p_box1, ["ΔCp"], df1.relΔΔCp.*100.0, marker=(:black, stroke(0)))
#end
return p_box1
end
function difference_estimation_to_alternative_data(res, db)
ΔTchar = Array{Float64}(undef, length(res.Name))
Δθchar = Array{Float64}(undef, length(res.Name))
ΔΔCp = Array{Float64}(undef, length(res.Name))
relΔTchar = Array{Float64}(undef, length(res.Name))
relΔθchar = Array{Float64}(undef, length(res.Name))
relΔΔCp = Array{Float64}(undef, length(res.Name))
for i=1:length(res.Name)
ii = findfirst(GasChromatographySimulator.CAS_identification(res.Name[i]).CAS.==db.CAS)
if !isnothing(ii)
ΔTchar[i] = Measurements.value(res.Tchar[i]) - (db.Tchar[ii] + 273.15)
Δθchar[i] = Measurements.value(res.θchar[i]) - db.thetachar[ii]
ΔΔCp[i] = Measurements.value(res.ΔCp[i]) - db.DeltaCp[ii]
relΔTchar[i] = ΔTchar[i]/(db.Tchar[ii] + 273.15)
relΔθchar[i] = Δθchar[i]/db.thetachar[ii]
relΔΔCp[i] = ΔΔCp[i]/db.DeltaCp[ii]
else
ΔTchar[i] = NaN
Δθchar[i] = NaN
ΔΔCp[i] = NaN
relΔTchar[i] = NaN
relΔθchar[i] = NaN
relΔΔCp[i] = NaN
end
end
diff = DataFrame(Name=res.Name, ΔTchar=ΔTchar, Δθchar=Δθchar, ΔΔCp=ΔΔCp, relΔTchar=relΔTchar, relΔθchar=relΔθchar, relΔΔCp=relΔΔCp)
return diff
end

# functions used for the optimization of the loss-function
#------------------
# Optimization only for the K-centric retention parameters
"""
opt_Kcentric(x, p)
Function used for optimization of the loss-function in regards to the three K-centric parameters.
# Arguments
* `x` ... 3n-vector of the three K-centric parameters of n solutes. Elements 1:n are Tchar, n+1:2n are θchar and 2n+1:3n are ΔCp values.
* `p` ... vector containing the fixed parameters:
* `tR = p[1]` ... mxn-array of the measured retention times in seconds.
* `L = p[2]` ... number of the length of the column in m.
* `d = p[3]` ... number of the diameters of the column in m.
* `prog = p[4]` ... m-array of structure GasChromatographySimulator.Programs containing the definition of the GC-programs.
* `opt = p[5]` ... structure GasChromatographySimulator.Options containing the options for the simulation.
* `gas = p[6]` ... string of name of the mobile phase gas.
* `metric = p[7]` ... string of the metric used for the loss function (`squared` or `abs`).
# Output
* the mean of the squared (or absolute, depending on `metric`) residuals over m GC-programs and n solutes.
"""
function opt_Kcentric(x, p)
# for 2-parameter version `x` is shorter, the part for `ΔCp = x[2*ns+1+1:3*ns+1]` will be missing
# instead set `ΔCp = zeros(ns)`
tR = p[1]
L = p[2]
d = p[3]
prog = p[4]
opt = p[5]
gas = p[6]
metric = p[7]
if length(size(tR)) == 1
ns = 1
else
ns = size(tR)[2]
end
Tchar = x[1:ns] # Array length = number solutes
θchar = x[ns+1:2*ns] # Array length = number solutes
ΔCp = x[2*ns+1:3*ns] # Array length = number solutes
return loss(tR, Tchar, θchar, ΔCp, L, d, prog, gas; opt=opt, metric=metric)[1]
end
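# The slicing convention of `opt_Kcentric` (and, shifted by one for `d`, of
# `opt_dKcentric`) can be factored out: the flat vector [Tchar...; θchar...; ΔCp...]
# of length 3ns splits back into the three parameter vectors. A sketch
# (hypothetical helper):

```julia
# split the flat optimization vector [Tchar...; θchar...; ΔCp...] of length 3ns
function unpack_Kcentric(x)
    ns = length(x) ÷ 3
    return x[1:ns], x[ns+1:2ns], x[2ns+1:3ns]
end
```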
function opt_Kcentric_(x, p)
# for 2-parameter version `x` is shorter, the part for `ΔCp = x[2*ns+1+1:3*ns+1]` will be missing
# instead set `ΔCp = zeros(ns)`
tR = p[1]
φ₀ = p[2]
L = p[3]
d = p[4]
df = p[5]
prog = p[6]
opt = p[7]
gas = p[8]
metric = p[9]
if length(size(tR)) == 1
ns = 1
else
ns = size(tR)[2]
end
Tchar = x[1:ns] # Array length = number solutes
θchar = x[ns+1:2*ns] # Array length = number solutes
ΔCp = x[2*ns+1:3*ns] # Array length = number solutes
return loss(tR, Tchar, θchar, ΔCp, φ₀, L, d, df, prog, gas; opt=opt, metric=metric)[1]
end
"""
optimize_Kcentric(tR, col, prog, Tchar_e::Vector{T}, θchar_e::Vector{T}, ΔCp_e::Vector{T}; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, metric="squared")
Optimization regarding the estimation of the retention parameters `Tchar`, `θchar` and `ΔCp`. The initial guess is a vector (`Tchar_e`, `θchar_e` and `ΔCp_e`) for optimization
algorithms that do not need lower/upper bounds. If a method is used that needs lower/upper bounds, the initial parameters should be a matrix, with the first row (`[1,:]`)
being the initial guess, the second row (`[2,:]`) the lower bound and the third row (`[3,:]`) the upper bound.
"""
function optimize_Kcentric(tR, col, prog, Tchar_e::Vector{T}, θchar_e::Vector{T}, ΔCp_e::Vector{T}; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, metric="squared") where T<:Number
p = (tR, col.L, col.d, prog, opt, col.gas, metric)
x0 = [Tchar_e; θchar_e; ΔCp_e]
optf = OptimizationFunction(opt_Kcentric, Optimization.AutoForwardDiff())
prob = Optimization.OptimizationProblem(optf, x0, p, f_calls_limit=maxiters)
opt_sol = solve(prob, method, maxiters=maxiters, maxtime=maxtime)
return opt_sol
end
function optimize_Kcentric(tR, col, prog, Tchar_e::Matrix{T}, θchar_e::Matrix{T}, ΔCp_e::Matrix{T}; method=BBO_adaptive_de_rand_1_bin_radiuslimited(), opt=std_opt, maxiters=10000, maxtime=600.0, metric="squared") where T<:Number # here default method should be one which needs bounds
p = (tR, col.L, col.d, prog, opt, col.gas, metric)
x0 = [Tchar_e[1,:]; θchar_e[1,:]; ΔCp_e[1,:]]
lb = [Tchar_e[2,:]; θchar_e[2,:]; ΔCp_e[2,:]]
ub = [Tchar_e[3,:]; θchar_e[3,:]; ΔCp_e[3,:]]
optf = OptimizationFunction(opt_Kcentric, Optimization.AutoForwardDiff())
prob = Optimization.OptimizationProblem(optf, x0, p, lb=lb, ub=ub, f_calls_limit=maxiters)
opt_sol = solve(prob, method, maxiters=maxiters, maxtime=maxtime)
return opt_sol
end
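# For the bounded method the 3×ns input matrices encode row 1 = initial guess,
# row 2 = lower bound, row 3 = upper bound. A sketch that builds such a matrix
# from a guess and a symmetric margin (hypothetical helper):

```julia
# rows: initial guess, lower bound, upper bound (one column per solute)
guess_with_bounds(x0, margin) = [x0'; (x0 .- margin)'; (x0 .+ margin)']
```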
function optimize_Kcentric_(tR, col, prog, Tchar_e::Vector{T}, θchar_e::Vector{T}, ΔCp_e::Vector{T}; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, metric="squared") where T<:Number
p = (tR, col.df/col.d, col.L, col.d, col.df, prog, opt, col.gas, metric)
x0 = [Tchar_e; θchar_e; ΔCp_e]
optf = OptimizationFunction(opt_Kcentric_, Optimization.AutoForwardDiff())
prob = Optimization.OptimizationProblem(optf, x0, p, f_calls_limit=maxiters)
opt_sol = solve(prob, method, maxiters=maxiters, maxtime=maxtime)
return opt_sol
end
"""
opt_dKcentric(x, p)
Function used for optimization of the loss-function in regards to the three K-centric parameters and the column diameter.
# Arguments
* `x` ... (3n+1)-vector of the column diameter and the three K-centric parameters of n solutes. The first element is the column diameter `d` and elements 2:n+1 are `Tchar`, n+2:2n+1 are `θchar` and 2n+2:3n+1 are `ΔCp` values.
* `p` ... vector containing the fixed parameters:
* `tR = p[1]` ... mxn-array of the measured retention times in seconds.
* `L = p[2]` ... number of the length of the column in m.
* `prog = p[3]` ... m-array of structure GasChromatographySimulator.Programs containing the definition of the GC-programs.
* `opt = p[4]` ... structure GasChromatographySimulator.Options containing the options for the simulation.
* `gas = p[5]` ... string of name of the mobile phase gas.
* `metric = p[6]` ... string of the metric used for the loss function (`squared` or `abs`).
# Output
* the mean of the squared (or absolute, depending on `metric`) residuals over m GC-programs and n solutes.
"""
function opt_dKcentric(x, p)
# for 2-parameter version `x` is shorter, the part for `ΔCp = x[2*ns+1+1:3*ns+1]` will be missing
# instead set `ΔCp = zeros(ns)`
tR = p[1]
L = p[2]
prog = p[3]
opt = p[4]
gas = p[5]
metric = p[6]
if length(size(tR)) == 1
ns = 1
else
ns = size(tR)[2]
end
d = x[1]
Tchar = x[2:ns+1] # Array length = number solutes
θchar = x[ns+1+1:2*ns+1] # Array length = number solutes
ΔCp = x[2*ns+1+1:3*ns+1] # Array length = number solutes
return loss(tR, Tchar, θchar, ΔCp, L, d, prog, gas; opt=opt, metric=metric)
end
function opt_dKcentric_(x, p)
# for 2-parameter version `x` is shorter, the part for `ΔCp = x[2*ns+1+1:3*ns+1]` will be missing
# instead set `ΔCp = zeros(ns)`
tR = p[1]
φ₀ = p[2]
L = p[3]
df = p[4]
prog = p[5]
opt = p[6]
gas = p[7]
metric = p[8]
if length(size(tR)) == 1
ns = 1
else
ns = size(tR)[2]
end
d = x[1]
Tchar = x[2:ns+1] # Array length = number solutes
θchar = x[ns+1+1:2*ns+1] # Array length = number solutes
ΔCp = x[2*ns+1+1:3*ns+1] # Array length = number solutes
return loss(tR, Tchar, θchar, ΔCp, φ₀, L, d, df, prog, gas; opt=opt, metric=metric)
end
"""
optimize_dKcentric(tR, col, prog, d_e, Tchar_e, θchar_e, ΔCp_e; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, metric="squared")
Optimization for the estimation of the column diameter `d` and the retention parameters `Tchar`, `θchar` and `ΔCp`. For optimization algorithms which do not need lower/upper bounds, the initial guess is a number (`d_e`) resp. a vector (`Tchar_e`, `θchar_e` and `ΔCp_e`). For methods which need lower/upper bounds, the initial parameters should be a vector of length 3 (`d_e`) resp. a matrix (`Tchar_e`, `θchar_e` and `ΔCp_e`), with the first element/column
being the initial guess, the second element/column the lower bound and the third element/column the upper bound.
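# Example
A sketch of the two input layouts (all numbers are illustrative):
```julia
# without bounds (e.g. method=NewtonTrustRegion()):
d_e = 0.25e-3                    # single initial guess in m
Tchar_e = [400.0, 450.0]         # one value per solute

# with bounds (e.g. method=BBO_adaptive_de_rand_1_bin_radiuslimited()):
d_e = [0.25e-3, 0.1e-3, 0.5e-3]                   # [guess, lower, upper]
Tchar_e = [400.0 350.0 450.0; 450.0 400.0 500.0]  # rows = solutes, columns = [guess, lower, upper]
```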
"""
function optimize_dKcentric(tR, col, prog, d_e::Number, Tchar_e::Vector{T}, θchar_e::Vector{T}, ΔCp_e::Vector{T}; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, metric="squared") where T<:Number
p = (tR, col.L, prog, opt, col.gas, metric)
x0 = [d_e; Tchar_e; θchar_e; ΔCp_e]
optf = OptimizationFunction(opt_dKcentric, Optimization.AutoForwardDiff())
prob = Optimization.OptimizationProblem(optf, x0, p, f_calls_limit=maxiters)
opt_sol = solve(prob, method, maxiters=maxiters, maxtime=maxtime)
return opt_sol
end
function optimize_dKcentric(tR, col, prog, d_e::Vector{T}, Tchar_e::Matrix{T}, θchar_e::Matrix{T}, ΔCp_e::Matrix{T}; method=BBO_adaptive_de_rand_1_bin_radiuslimited(), opt=std_opt, maxiters=10000, maxtime=600.0, metric="squared") where T<:Number # here default method should be one which needs bounds
p = (tR, col.L, prog, opt, col.gas, metric)
x0 = [d_e[1]; Tchar_e[1,:]; θchar_e[1,:]; ΔCp_e[1,:]]
lb = [d_e[2]; Tchar_e[2,:]; θchar_e[2,:]; ΔCp_e[2,:]]
ub = [d_e[3]; Tchar_e[3,:]; θchar_e[3,:]; ΔCp_e[3,:]]
optf = OptimizationFunction(opt_dKcentric, Optimization.AutoForwardDiff())
prob = Optimization.OptimizationProblem(optf, x0, p, lb=lb, ub=ub, f_calls_limit=maxiters)
opt_sol = solve(prob, method, maxiters=maxiters, maxtime=maxtime)
return opt_sol
end
function optimize_dKcentric_(tR, col, prog, d_e::Number, Tchar_e::Vector{T}, θchar_e::Vector{T}, ΔCp_e::Vector{T}; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, metric="squared") where T<:Number
p = (tR, col.df/col.d, col.L, col.df, prog, opt, col.gas, metric)
x0 = [d_e; Tchar_e; θchar_e; ΔCp_e]
optf = OptimizationFunction(opt_dKcentric_, Optimization.AutoForwardDiff())
prob = Optimization.OptimizationProblem(optf, x0, p, f_calls_limit=maxiters)
opt_sol = solve(prob, method, maxiters=maxiters, maxtime=maxtime)
return opt_sol
end
function estimate_parameters(tRs, solute_names, col, prog, rp1_e, rp2_e, rp3_e; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, mode="dKcentric", metric="squared", pout="vacuum", time_unit="min")
# mode = "Kcentric", "Kcentric_single", "dKcentric", "dKcentric_single"
# add the case for 2-parameter model, where rp3 === 0.0 always
# -> alternative versions of the different `optimize_` functions (without the third retention parameter)
# -> similar, alternative versions for `opt_` functions needed
if time_unit == "min"
a = 60.0
else
a = 1.0
end
tR_meas = Array(tRs[:,2:end]).*a
if length(size(tR_meas)) == 1
ns = 1
else
ns = size(tR_meas)[2]
end
d_e = col.d
rp1 = Array{Float64}(undef, ns)
rp2 = Array{Float64}(undef, ns)
rp3 = Array{Float64}(undef, ns)
min = Array{Float64}(undef, ns)
#retcode = Array{Any}(undef, ns)
if mode == "Kcentric_single"
sol = Array{SciMLBase.OptimizationSolution}(undef, ns)
for j=1:ns
sol[j] = optimize_Kcentric(tR_meas[:,j], col, prog, rp1_e[j,:], rp2_e[j,:], rp3_e[j,:]; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
rp1[j] = sol[j][1]
rp2[j] = sol[j][2]
rp3[j] = sol[j][3]
min[j] = sol[j].minimum
#retcode[j] = sol[j].retcode
end
df = DataFrame(Name=solute_names, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
elseif mode == "Kcentric"
sol = optimize_Kcentric(tR_meas, col, prog, rp1_e, rp2_e, rp3_e; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
rp1 = sol[1:ns] # Array length = number solutes
rp2 = sol[ns+1:2*ns] # Array length = number solutes
rp3 = sol[2*ns+1:3*ns] # Array length = number solutes
for j=1:ns
min[j] = sol.minimum
#retcode[j] = sol.retcode
end
df = DataFrame(Name=solute_names, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
elseif mode == "dKcentric"
sol = optimize_dKcentric(tR_meas, col, prog, d_e, rp1_e, rp2_e, rp3_e; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
d = sol[1].*ones(ns)
rp1 = sol[2:ns+1] # Array length = number solutes
rp2 = sol[ns+1+1:2*ns+1] # Array length = number solutes
rp3 = sol[2*ns+1+1:3*ns+1] # Array length = number solutes
for j=1:ns
min[j] = sol.minimum
#retcode[j] = sol.retcode
end
df = DataFrame(Name=solute_names, d=d, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
elseif mode == "dKcentric_single"
sol = Array{SciMLBase.OptimizationSolution}(undef, ns)
d = Array{Float64}(undef, ns)
for j=1:ns
sol[j] = optimize_dKcentric(tR_meas[:,j], col, prog, d_e, rp1_e[j,:], rp2_e[j,:], rp3_e[j,:]; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
d[j] = sol[j][1]
rp1[j] = sol[j][2]
rp2[j] = sol[j][3]
rp3[j] = sol[j][4]
min[j] = sol[j].minimum
#retcode[j] = sol[j].retcode
end
df = DataFrame(Name=solute_names, d=d, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
end
return df, sol
end
"""
estimate_parameters()
Calculate the estimates for the K-centric parameters and (optional) the column diameter.
# Arguments
* `chrom` ... Tuple of the loaded chromatogram, see [`load_chromatograms`](@ref)
# Options
* `method=NewtonTrustRegion()` ... used optimization method
* `opt=std_opt` ... general options, `std_opt = GasChromatographySimulator.Options(abstol=1e-8, reltol=1e-5, ng=true, odesys=false)`
* `maxiters=10000` ... maximum number of iterations for every single optimization
* `maxtime=600.0` ... maximum time for every single optimization
* `mode="dKcentric"` ... mode of the estimation.
Possible options:
* "Kcentric_single" ... optimization for the three K-centric retention parameters separatly for every solute
* "Kcentric" ... optimization for the three K-centric retention parameters together for all solutes
* "dKcentric_single" ... optimization for the column diameter and the three K-centric retention parameters separatly for every solute
* "dKcentric" ... optimization for the column diameter and the three K-centric retention parameters together for all solutes
* `metric="squared"` ... used metric for the loss function ("squared" or "abs")
# Output
* `df` ... DataFrame with the columns `Name` (solute names), `d` (estimated column diameter, optional), `Tchar` (estimated Tchar), `θchar` (estimated θchar), `ΔCp` (estimated ΔCp) and `min` (value of the loss function at the found optima)
* `sol` ... Array of `SciMLBase.OptimizationSolution` with the results of the optimization and some additional information.
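# Example
A hypothetical workflow (the file name is an assumption):
```julia
chrom = load_chromatograms("measurements.csv")  # tuple of column, programs, retention times, solute names, ...
df, sol = estimate_parameters(chrom; mode="Kcentric_single")
df.Tchar  # estimated characteristic temperatures
```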
"""
function estimate_parameters(chrom; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, mode="dKcentric", metric="squared")
Tchar_est, θchar_est, ΔCp_est, Telu_max = estimate_start_parameter(chrom[3], chrom[1], chrom[2]; time_unit=chrom[6])
return estimate_parameters(chrom[3], chrom[4], chrom[1], chrom[2], Tchar_est, θchar_est, ΔCp_est; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, mode=mode, metric=metric, pout=chrom[5], time_unit=chrom[6])
end
function estimate_parameters_(tRs, solute_names, col, prog, rp1_e, rp2_e, rp3_e; method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0, mode="dKcentric", metric="squared", pout="vacuum", time_unit="min")
# mode = "Kcentric", "Kcentric_single", "dKcentric", "dKcentric_single"
if time_unit == "min"
a = 60.0
else
a = 1.0
end
tR_meas = Array(tRs[:,2:end]).*a
if length(size(tR_meas)) == 1
ns = 1
else
ns = size(tR_meas)[2]
end
d_e = col.d
rp1 = Array{Float64}(undef, ns)
rp2 = Array{Float64}(undef, ns)
rp3 = Array{Float64}(undef, ns)
min = Array{Float64}(undef, ns)
#retcode = Array{Any}(undef, ns)
if mode == "Kcentric_single"
sol = Array{SciMLBase.OptimizationSolution}(undef, ns)
for j=1:ns
sol[j] = optimize_Kcentric_(tR_meas[:,j], col, prog, rp1_e[j,:], rp2_e[j,:], rp3_e[j,:]; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
rp1[j] = sol[j][1]
rp2[j] = sol[j][2]
rp3[j] = sol[j][3]
min[j] = sol[j].minimum
#retcode[j] = sol[j].retcode
end
df = DataFrame(Name=solute_names, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
elseif mode == "Kcentric"
sol = optimize_Kcentric_(tR_meas, col, prog, rp1_e, rp2_e, rp3_e; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
rp1 = sol[1:ns] # Array length = number solutes
rp2 = sol[ns+1:2*ns] # Array length = number solutes
rp3 = sol[2*ns+1:3*ns] # Array length = number solutes
for j=1:ns
min[j] = sol.minimum
#retcode[j] = sol.retcode
end
df = DataFrame(Name=solute_names, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
elseif mode == "dKcentric"
sol = optimize_dKcentric_(tR_meas, col, prog, d_e, rp1_e, rp2_e, rp3_e; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
d = sol[1].*ones(ns)
rp1 = sol[2:ns+1] # Array length = number solutes
rp2 = sol[ns+1+1:2*ns+1] # Array length = number solutes
rp3 = sol[2*ns+1+1:3*ns+1] # Array length = number solutes
for j=1:ns
min[j] = sol.minimum
#retcode[j] = sol.retcode
end
df = DataFrame(Name=solute_names, d=d, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
elseif mode == "dKcentric_single"
sol = Array{SciMLBase.OptimizationSolution}(undef, ns)
d = Array{Float64}(undef, ns)
for j=1:ns
sol[j] = optimize_dKcentric_(tR_meas[:,j], col, prog, d_e, rp1_e[j,:], rp2_e[j,:], rp3_e[j,:]; method=method, opt=opt, maxiters=maxiters, maxtime=maxtime, metric=metric)
d[j] = sol[j][1]
rp1[j] = sol[j][2]
rp2[j] = sol[j][3]
rp3[j] = sol[j][4]
min[j] = sol[j].minimum
#retcode[j] = sol[j].retcode
end
df = DataFrame(Name=solute_names, d=d, Tchar=rp1, θchar=rp2, ΔCp=rp3, min=min)#, retcode=retcode)
end
return df, sol
end
# full methods
"""
check_measurement(meas, col_input; min_th=0.1, loss_th=1.0, se_col=true)
Similar to [`method_m1`](@ref), estimate the three retention parameters ``T_{char}``, ``θ_{char}`` and ``ΔC_p`` including standard errors, see [`stderror`](@ref).
In addition, if a found optimized minimum is above the threshold `min_th`, it is flagged, and the squared differences between single measured and calculated retention times which exceed
a second threshold `loss_th` are recorded.
# Arguments
* `meas` ... Tuple with the loaded measurement data, see [`load_chromatograms`](@ref).
* `col_input` ... Named tuple with `col_input.L` the column length in m and `col_input.d` the column diameter in mm. If this parameter is not given, these values are taken from `meas`.
* `se_col=true` ... If `true` the standard errors (from the Hessian matrix, see [`stderror`](@ref)) of the estimated parameters are added as separate columns to the result dataframe. If `false` the standard errors are added to the values as `Measurement` type.
# Output
* `check` ... Boolean. `true` if all values are below the thresholds, `false` if not.
* `msg` ... String. Description of `check`
* `df_flag` ... Dataframe containing the names of the flagged measurements and solutes and the corresponding measured and calculated retention times.
* `index_flag` ... Indices of flagged results
* `res` ... Dataframe with the optimized parameters and the found minima.
* `Telu_max` ... The maximum of elution temperatures every solute experiences in the measured programs.
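# Example
A hypothetical call (the column dimensions are assumed values; `meas` is the tuple returned by [`load_chromatograms`](@ref)):
```julia
col_input = (L = 30.0, d = 0.25)  # length in m, diameter in mm
check, msg, df_flag, index_flag, res, Telu_max = check_measurement(meas, col_input)
check || @warn msg df_flag  # inspect flagged retention times if the check fails
```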
"""
function check_measurement(meas, col_input; min_th=0.1, loss_th=1.0, se_col=true, method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0)
col = GasChromatographySimulator.Column(col_input.L, col_input.d*1e-3, meas[1].df, meas[1].sp, meas[1].gas)
Tchar_est, θchar_est, ΔCp_est, Telu_max = RetentionParameterEstimator.estimate_start_parameter(meas[3], col, meas[2]; time_unit=meas[6])
df = estimate_parameters(meas[3], meas[4], col, meas[2], Tchar_est, θchar_est, ΔCp_est; mode="Kcentric_single", pout=meas[5], time_unit=meas[6], method=method, opt=opt, maxiters=maxiters, maxtime=maxtime)[1]
index_flag = findall(df.min.>min_th)
if length(index_flag) == 0
check = true
msg = "retention times in normal range"
df_flag = DataFrame()
else
check = false
msg = "discrapancy of retention times detected"
loss_flag, tRcalc, tRmeas = flagged_loss(meas, df, index_flag)
index_loss_flag = findall(loss_flag.>loss_th)
flag_meas = Array{String}(undef, length(index_loss_flag))
flag_sub = Array{String}(undef, length(index_loss_flag))
tRmeas_ = Array{Float64}(undef, length(index_loss_flag))
tRcalc_ = Array{Float64}(undef, length(index_loss_flag))
for i=1:length(index_loss_flag)
flag_meas[i] = meas[3].measurement[index_loss_flag[i][1]]
flag_sub[i] = meas[4][index_flag[index_loss_flag[i][2]]]
tRmeas_[i] = tRmeas[index_loss_flag[i][1], index_loss_flag[i][2]]
tRcalc_[i] = tRcalc[index_loss_flag[i][1], index_loss_flag[i][2]]
end
df_flag = DataFrame(measurement=flag_meas, solute=flag_sub, tRmeas=tRmeas_, tRcalc=tRcalc_)
end
# calculate the standard errors of the 3 parameters using the hessian matrix
stderrors = stderror(meas, df, col_input)[1]
# output dataframe
res = if se_col == true
DataFrame(Name=df.Name, min=df.min, Tchar=df.Tchar, Tchar_std=stderrors.sd_Tchar, θchar=df.θchar, θchar_std=stderrors.sd_θchar, ΔCp=df.ΔCp, ΔCp_std=stderrors.sd_ΔCp)
else
DataFrame(Name=df.Name, min=df.min, Tchar=df.Tchar.±stderrors.sd_Tchar, θchar=df.θchar.±stderrors.sd_θchar, ΔCp=df.ΔCp.±stderrors.sd_ΔCp)
end
return check, msg, df_flag, index_flag, res, Telu_max
end
function flagged_loss(meas, df, index_flag)
if meas[6] == "min"
a = 60.0
else
a = 1.0
end
tRcalc = Array{Float64}(undef, length(meas[3].measurement), length(index_flag))
tRmeas = Array{Float64}(undef, length(meas[3].measurement), length(index_flag))
for j=1:length(index_flag)
for i=1:length(meas[3].measurement)
tRcalc[i,j] = RetentionParameterEstimator.tR_calc(df.Tchar[index_flag[j]], df.θchar[index_flag[j]], df.ΔCp[index_flag[j]], meas[1].L, meas[1].d, meas[2][i], meas[1].gas)
tRmeas[i,j] = meas[3][!, index_flag[j]+1][i]*a
end
end
(tRmeas.-tRcalc).^2, tRcalc, tRmeas
end
"""
method_m1(meas, col_input; se_col=true)
Estimation of the three retention parameters ``T_{char}``, ``θ_{char}`` and ``ΔC_p`` including standard errors, see [`stderror`](@ref).
# Arguments
* `meas` ... Tuple with the loaded measurement data, see [`load_chromatograms`](@ref).
* `col_input` ... Named tuple with `col_input.L` the column length in m and `col_input.d` the column diameter in mm. If this parameter is not given, these values are taken from `meas`.
* `se_col=true` ... If `true` the standard errors (from the Hessian matrix, see [`stderror`](@ref)) of the estimated parameters are added as separate columns to the result dataframe. If `false` the standard errors are added to the values as `Measurement` type.
# Output
* `res` ... Dataframe with the optimized parameters and the found minima.
* `Telu_max` ... The maximum of elution temperatures every solute experiences in the measured programs.
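# Example
A hypothetical call (the column dimensions are assumed values; `meas` is the tuple returned by [`load_chromatograms`](@ref)):
```julia
col_input = (L = 30.0, d = 0.25)  # length in m, diameter in mm
res, Telu_max = method_m1(meas, col_input)
res.Tchar  # estimated characteristic temperatures
```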
"""
function method_m1(meas, col_input; se_col=true, method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0)
# definition of the column
col = GasChromatographySimulator.Column(col_input.L, col_input.d*1e-3, meas[1].df, meas[1].sp, meas[1].gas)
# calculate start parameters
Tchar_est, θchar_est, ΔCp_est, Telu_max = estimate_start_parameter(meas[3], col, meas[2]; time_unit=meas[6])
# optimize every solute separately for the 3 remaining parameters `Tchar`, `θchar`, `ΔCp`
res_ = estimate_parameters(meas[3], meas[4], col, meas[2], Tchar_est, θchar_est, ΔCp_est; mode="Kcentric_single", pout=meas[5], time_unit=meas[6], method=method, opt=opt, maxiters=maxiters, maxtime=maxtime)[1]
# calculate the standard errors of the 3 parameters using the hessian matrix
stderrors = stderror(meas, res_, col_input)[1]
# output dataframe
res = if se_col == true
DataFrame(Name=res_.Name, min=res_.min, Tchar=res_.Tchar, Tchar_std=stderrors.sd_Tchar, θchar=res_.θchar, θchar_std=stderrors.sd_θchar, ΔCp=res_.ΔCp, ΔCp_std=stderrors.sd_ΔCp)
else
DataFrame(Name=res_.Name, min=res_.min, Tchar=res_.Tchar.±stderrors.sd_Tchar, θchar=res_.θchar.±stderrors.sd_θchar, ΔCp=res_.ΔCp.±stderrors.sd_ΔCp)
end
return res, Telu_max
end
"""
method_m2(meas; se_col=true)
Estimation of the column diameter ``d`` and three retention parameters ``T_{char}``, ``θ_{char}`` and ``Δ C_p`` including standard errors, see [`stderror`](@ref).
In a first run, all four parameters are estimated for every substance separately, resulting in different optimized column diameters. The mean of these column diameters is then used in
a second optimization, which keeps this mean diameter fixed and optimizes the remaining three retention parameters ``T_{char}``, ``θ_{char}`` and ``Δ C_p``.
# Arguments
* `meas` ... Tuple with the loaded measurement data, see [`load_chromatograms`](@ref).
* `se_col=true` ... If `true` the standard errors (from the Hessian matrix, see [`stderror`](@ref)) of the estimated parameters are added as separate columns to the result dataframe. If `false` the standard errors are added to the values as `Measurement` type.
# Output
* `res` ... Dataframe with the optimized parameters and the found minima.
* `Telu_max` ... The maximum of elution temperatures every solute experiences in the measured programs.
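# Example
A hypothetical call (`meas` is the tuple returned by [`load_chromatograms`](@ref)):
```julia
res, Telu_max = method_m2(meas)
res.d  # estimated column diameter (the mean value, identical for all solutes)
```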
"""
function method_m2(meas; se_col=true, method=NewtonTrustRegion(), opt=std_opt, maxiters=10000, maxtime=600.0)
# retention times; use only the solutes which have non-missing retention time entries
tRs = meas[3][!,findall((collect(any(ismissing, c) for c in eachcol(meas[3]))).==false)]
# solute names; use only the solutes which have non-missing retention time entries
solute_names = meas[4][findall((collect(any(ismissing, c) for c in eachcol(meas[3]))).==false)[2:end].-1]
# calculate start parameters
Tchar_est, θchar_est, ΔCp_est, Telu_max = estimate_start_parameter(tRs, meas[1], meas[2]; time_unit=meas[6])
# optimize every solute separately for the 4 parameters `Tchar`, `θchar`, `ΔCp` and `d`
res_dKcentric_single = estimate_parameters(tRs, solute_names, meas[1], meas[2], Tchar_est, θchar_est, ΔCp_est; pout=meas[5], time_unit=meas[6], mode="dKcentric_single", method=method, opt=opt, maxiters=maxiters, maxtime=maxtime)[1]
# define a new column with the mean value of the estimated `d` over all solutes
new_col = GasChromatographySimulator.Column(meas[1].L, mean(res_dKcentric_single.d), meas[1].df, meas[1].sp, meas[1].gas)
# optimize every solute separately for the 3 remaining parameters `Tchar`, `θchar`, `ΔCp`
res_ = estimate_parameters(tRs, solute_names, new_col, meas[2], res_dKcentric_single.Tchar, res_dKcentric_single.θchar, res_dKcentric_single.ΔCp; pout=meas[5], time_unit=meas[6], mode="Kcentric_single", method=method, opt=opt, maxiters=maxiters, maxtime=maxtime)[1]
res_[!, :d] = mean(res_dKcentric_single.d).*ones(length(res_.Name))
res_[!, :d_std] = std(res_dKcentric_single.d).*ones(length(res_.Name))
# calculate the standard errors of the 3 parameters using the hessian matrix
stderrors = stderror(meas, res_)[1]
# output dataframe
res = if se_col == true
DataFrame(Name=res_.Name, min=res_.min, Tchar=res_.Tchar, Tchar_std=stderrors.sd_Tchar, θchar=res_.θchar, θchar_std=stderrors.sd_θchar, ΔCp=res_.ΔCp, ΔCp_std=stderrors.sd_ΔCp, d=res_.d, d_std=res_.d_std)
else
DataFrame(Name=res_.Name, min=res_.min, Tchar=res_.Tchar.±stderrors.sd_Tchar, θchar=res_.θchar.±stderrors.sd_θchar, ΔCp=res_.ΔCp.±stderrors.sd_ΔCp, d=res_.d.±res_.d_std)
end
return res, Telu_max
end
"""
stderror(meas, res)
Calculation of the standard errors of the optimized parameters using the Hessian matrix at the optimum.
# Arguments
* `meas` ... Tuple with the loaded measurement data, see [`load_chromatograms`](@ref).
* `res` ... Dataframe with the result of the optimization, see [`estimate_parameters`](@ref).
Optional parameters
* `col_input` ... Named tuple with `col_input.L` the column length in m and `col_input.d` the column diameter in mm. If this parameter is not given, these values are taken from `meas`.
# Output
* `stderrors` ... Dataframe with the standard errors of the optimized parameters.
* `Hessian` ... The Hessian matrix at the found optimum.
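# Example
A toy illustration of the underlying estimate (standard errors from the diagonal of the inverse Hessian at the optimum; the loss function here is made up):
```julia
using ForwardDiff
LF(x) = (x[1] - 400.0)^2 + 2*(x[2] - 30.0)^2  # toy loss with its minimum at [400.0, 30.0]
H = ForwardDiff.hessian(LF, [400.0, 30.0])
se = sqrt.(abs.(inv(H)))  # elementwise, as in this function
se[1,1], se[2,2]          # standard errors of the two parameters
```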
"""
function stderror(meas, res)
sdTchar = Array{Float64}(undef, size(res)[1])
sdθchar = Array{Float64}(undef, size(res)[1])
sdΔCp = Array{Float64}(undef, size(res)[1])
Hessian = Array{Any}(undef, size(res)[1])
for i=1:size(res)[1]
# the loss-function used in the optimization
p = [meas[3][!,i+1].*60.0, meas[1].L, meas[1].d, meas[2], std_opt, meas[1].gas, "squared"]
LF(x) = opt_Kcentric(x, p)
# the hessian matrix of the loss-function, calculated with ForwardDiff.jl
H(x) = ForwardDiff.hessian(LF, x)
# the hessian matrix at the found optima
Hessian[i] = H([res.Tchar[i], res.θchar[i], res.ΔCp[i]])
# the calculated standard errors of the parameters
sdTchar[i] = sqrt.(abs.(inv(Hessian[i])))[1,1]
sdθchar[i] = sqrt.(abs.(inv(Hessian[i])))[2,2]
sdΔCp[i] = sqrt.(abs.(inv(Hessian[i])))[3,3]
end
stderrors = DataFrame(Name=res.Name, sd_Tchar=sdTchar, sd_θchar=sdθchar, sd_ΔCp=sdΔCp)
return stderrors, Hessian
end
function stderror(meas, res, col_input)
sdTchar = Array{Float64}(undef, size(res)[1])
sdθchar = Array{Float64}(undef, size(res)[1])
sdΔCp = Array{Float64}(undef, size(res)[1])
Hessian = Array{Any}(undef, size(res)[1])
for i=1:size(res)[1]
# the loss-function used in the optimization
p = [meas[3][!,i+1].*60.0, col_input.L, col_input.d, meas[2], std_opt, meas[1].gas, "squared"]
LF(x) = opt_Kcentric(x, p)
# the hessian matrix of the loss-function, calculated with ForwardDiff.jl
H(x) = ForwardDiff.hessian(LF, x)
# the hessian matrix at the found optima
Hessian[i] = H([res.Tchar[i], res.θchar[i], res.ΔCp[i]])
# the calculated standard errors of the parameters
sdTchar[i] = sqrt.(abs.(inv(Hessian[i])))[1,1]
sdθchar[i] = sqrt.(abs.(inv(Hessian[i])))[2,2]
sdΔCp[i] = sqrt.(abs.(inv(Hessian[i])))[3,3]
end
stderrors = DataFrame(Name=res.Name, sd_Tchar=sdTchar, sd_θchar=sdθchar, sd_ΔCp=sdΔCp)
return stderrors, Hessian
end

module RetentionParameterEstimator
using Reexport
@reexport using CSV
@reexport using DataFrames
using ForwardDiff
using GasChromatographySimulator
using Interpolations
@reexport using Measurements
using Optimization
using OptimizationBBO
using OptimizationCMAEvolutionStrategy
using OptimizationOptimJL
using OptimizationOptimisers
@reexport using Plots
@reexport using PlutoUI
@reexport using StatsPlots
using UrlDownload
@reexport using Statistics
include("Load.jl")
include("Loss.jl")
include("Estimate_Start_Values.jl")
include("Optimization.jl")
include("Simulate_Test.jl")
include("Misc.jl")
include("Notebook.jl")
const θref = 30.0
const rT_nom = 0.69
const Tst = 273.15
const R = 8.31446261815324
const std_opt = GasChromatographySimulator.Options(abstol=1e-8, reltol=1e-5, ng=true, odesys=false)
end # module

# Functions used to simulate chromatograms to use as test values for the optimization
#"""
#conventional_GC(L, d, df, sp, gas, TP, PP, solutes, db_path, db_file)
#
#Description.
#"""
function conventional_GC(L, d, df, sp, gas, TP, PP, solutes, db_path, db_file; pout="vacuum", time_unit="min")
opt = std_opt
col = GasChromatographySimulator.Column(L, d, df, sp, gas)
prog = GasChromatographySimulator.Program(TP, PP, L; pout=pout, time_unit=time_unit)
sub = GasChromatographySimulator.load_solute_database(db_path, db_file, sp, gas, solutes, zeros(length(solutes)), zeros(length(solutes)))
par = GasChromatographySimulator.Parameters(col, prog, sub, opt)
return par
end
#"""
# sim_test_chrom(L, d, df, sp, gas, TPs, PPs, solutes, db_path, db_file)
#
#Description.
#"""
function sim_test_chrom(L, d, df, sp, gas, TPs, PPs, solutes, db_path, db_file; pout="vacuum", time_unit="min")
par_meas = Array{GasChromatographySimulator.Parameters}(undef, length(TPs))
for i=1:length(TPs)
par_meas[i] = conventional_GC(L, d, df, sp, gas, TPs[i], PPs[i], solutes, db_path, db_file; pout=pout, time_unit=time_unit)
end
tR_meas = Array{Float64}(undef, length(TPs), length(solutes))
for i=1:length(TPs)
pl_meas = GasChromatographySimulator.simulate(par_meas[i])[1]
for j=1:length(solutes)
jj = findfirst(pl_meas.Name.==solutes[j])
tR_meas[i,j] = pl_meas.tR[jj]
end
end
return tR_meas, par_meas
end
function sim_test_chrom(column, TPs, PPs, solutes, db_path, db_file)
tR_meas, par_meas = sim_test_chrom(column[:L], column[:d], column[:df], column[:sp], column[:gas], TPs, PPs, solutes, db_path, db_file; pout=column[:pout], time_unit=column[:time_unit])
return tR_meas, par_meas
end
function convert_vector_of_vector_to_2d_array(TPs)
nTP = Array{Int}(undef, length(TPs))
for i=1:length(TPs)
nTP[i] = length(TPs[i])
end
TPm = Array{Union{Float64,Missing}}(undef, length(TPs), maximum(nTP))
for i=1:length(TPs)
for j=1:maximum(nTP)
if j>length(TPs[i])
TPm[i,j] = missing
else
TPm[i,j] = TPs[i][j]
end
end
end
return TPm
end
function program_header!(df_P, c)
header = Array{Symbol}(undef, size(df_P)[2])
header[1] = :measurement
i1 = 1
i2 = 1
i3 = 1
for i=2:size(df_P)[2]
if i in collect(2:3:size(df_P)[2])
header[i] = Symbol(string(c, "$(i1)"))
i1 = i1 + 1
elseif i in collect(3:3:size(df_P)[2])
header[i] = Symbol("t$(i2)")
i2 = i2 + 1
else
header[i] = Symbol(string("R", c, "$(i3)"))
i3 = i3 + 1
end
end
return rename!(df_P, header)
end
function save_simulation(file, column, measurement_name, TPs, PPs, pamb, solutes, tR_meas)
# save the results:
#
# system information in header
# L, d, df, sp, gas, pout, time_unit
CSV.write(file, DataFrame(column))
# Program
# measurment_name, TP, p1, p2, ... , pamb
TPm = RetentionParameterEstimator.convert_vector_of_vector_to_2d_array(TPs)
PPm = RetentionParameterEstimator.convert_vector_of_vector_to_2d_array(PPs)
df_prog = DataFrame([measurement_name TPm PPm[:,1:3:end].-pamb fill(pamb, length(measurement_name))], :auto)
# header naming for Program
header = Array{Symbol}(undef, size(df_prog)[2])
header[1] = :measurement
header[end] = :pamb
np = size(df_prog)[2] - 2 # number of program columns (temperature and pressure)
nrates = Int((np - 3)/4) # number of rates of the temperature program
i1 = 1
i2 = 1
i3 = 1
for i=2:(np-nrates) # temperature program part
if i in collect(2:3:(np-nrates))
header[i] = Symbol(string("T", "$(i1)"))
i1 = i1 + 1
elseif i in collect(3:3:(np-nrates))
header[i] = Symbol("t$(i2)")
i2 = i2 + 1
else
header[i] = Symbol(string("RT", "$(i3)"))
i3 = i3 + 1
end
end
for i=(np-nrates+1):(size(df_prog)[2]-1) # pressure part
header[i] = Symbol(string("p$(i-(np-nrates))"))
end
rename!(df_prog, header)
CSV.write(file, df_prog, append=true, writeheader=true)
# retention times
# measurement_name, tR
if column[:time_unit] == "min"
a = 60.0
else
a = 1.0
end
df_tR = DataFrame(measurement=measurement_name)
for i=1:length(solutes)
df_tR[!, solutes[i]] = tR_meas[:,i]./a
end
CSV.write(file, df_tR, append=true, writeheader=true)
end

using Test, RetentionParameterEstimator, GasChromatographySimulator
L = 30.0
d = 0.25e-3
df = 0.25e-6
sp = "SLB5ms"
gas = "He"
TP = [40.0, 3.0, 15.0, 340.0, 5.0]
PP = [150000.0, 3.0, 5000.0, 250000.0, 5.0]
solutes = ["Decane", "2-Octanol", "Pentadecane"]
db_path = "./data"
db_file = "Database_SLB5ms.csv"
pout = "vacuum"
time_unit = "min"
# Simulation_Test.jl
par = RetentionParameterEstimator.conventional_GC(L, d, df, sp, gas, TP, PP, solutes, db_path, db_file; pout=pout, time_unit=time_unit)
@test par.prog.time_steps[2] == 3.0*60.0
tR_sim, par_sim = RetentionParameterEstimator.sim_test_chrom(L, d, df, sp, gas, [TP, TP], [PP, PP], solutes, db_path, db_file; pout=pout, time_unit=time_unit)
@test tR_sim[1,1] == tR_sim[2,1]
# more meaningful test?
# compare GasChromatographySimulator.simulate() and RetentionParameterEstimator.tR_calc()
# pl, sol = GasChromatographySimulator.simulate(par)
# Tchar = par.sub[3].Tchar
# θchar = par.sub[3].θchar
# ΔCp = par.sub[3].ΔCp
# opt = par.opt
##tR = RetentionParameterEstimator.tR_calc(Tchar, θchar, ΔCp, df/d, L, d, df, par.prog, gas)
##pl.tR[3] == tR # false, because of odesys=true for GasChromatographySimulator
# and in RetentionParameterEstimator only the migration ODE is used
##@test isapprox(pl.tR[3], tR, atol=1e-4)
#opt_ = GasChromatographySimulator.Options(ng=true)
#par_ = GasChromatographySimulator.Parameters(par.col, par.prog, par.sub, opt_)
#pl_, sol_ = GasChromatographySimulator.simulate(par_)
#opt__ = GasChromatographySimulator.Options(ng=true, odesys=false)
#par__ = GasChromatographySimulator.Parameters(par.col, par.prog, par.sub, opt__)
#pl__, sol__ = GasChromatographySimulator.simulate(par__)
#opt___ = GasChromatographySimulator.Options(ng=false, odesys=false)
#par___ = GasChromatographySimulator.Parameters(par.col, par.prog, par.sub, opt___)
#pl___, sol___ = GasChromatographySimulator.simulate(par___)
#[pl.tR[3], pl_.tR[3], pl__.tR[3], pl___.tR[3]].-tR
# lowest difference for opt__ and opt___ to tR ≈ 1e-5
# Load.jl
file = "./data/meas_test.csv"
meas = RetentionParameterEstimator.load_chromatograms(file; filter_missing=true)
@test meas[4][1] == "2-Octanone"
meas_select = RetentionParameterEstimator.filter_selected_measurements(meas, ["meas3", "meas4", "meas5"], ["2-Octanone"]);
file = "./data/meas_test_2.csv" # from Email from [email protected] -> Issue #29
meas_ = RetentionParameterEstimator.load_chromatograms(file; filter_missing=true)
@test meas_[2][1].Fpin_steps[3] == meas_[2][1].Fpin_steps[3]
# Loss.jl
# Estimate_Start_Values.jl
# Optimization.jl
#df, sol = RetentionParameterEstimator.estimate_parameters(meas; mode="Kcentric_single")
#@test isapprox(df.Tchar[1], 400.15; atol = 0.01)
#@test isapprox(sol[2].minimum, 0.009; atol = 0.0001)
col_input = (L = meas_select[1].L, d = meas_select[1].d*1000)
check, msg, df_flag, index_flag, res, Telu_max = RetentionParameterEstimator.check_measurement(meas_select, col_input; min_th=0.1, loss_th=1.0, se_col=false)
@test check == true
@test isapprox(res.Tchar[1], 400.0; atol = 1.0)
#@test isapprox(res.min[2], 0.009; atol = 0.001)
#col_input_ = (L = meas[1].L, d = meas[1].d*1000*1.1)
#check_, msg_, df_flag_, index_flag_, res_, Telu_max_ = RetentionParameterEstimator.check_measurement(meas, col_input_; min_th=0.1, loss_th=1.0)
#@test msg_ == "discrapancy of retention times detected"
#@test isapprox(res_.Tchar[1], 415.5; atol = 0.1)
#@test isapprox(res_.min[2], 0.21; atol = 0.01)
res_m1, Telu_max_m1 = RetentionParameterEstimator.method_m1(meas_select, col_input, se_col=false)
@test res_m1.Tchar == res.Tchar
@test Telu_max_m1 == Telu_max
res_m2, Telu_max_m2 = RetentionParameterEstimator.method_m2(meas_select, se_col=true)
@test isapprox(res_m2.d[1], 0.00024, atol=0.00001)
# ToDo:
# add tests for missing measurement values
# RetentionParameterEstimator.jl
[](https://doi.org/10.1016/j.chroma.2023.464008)
[](https://zenodo.org/badge/latestdoi/550339258)
[](https://GasChromatographyToolbox.github.io/RetentionParameterEstimator.jl/stable)
[](https://GasChromatographyToolbox.github.io/RetentionParameterEstimator.jl/dev)
[](https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl/actions/workflows/ci.yml)
[](http://codecov.io/github/GasChromatographyToolbox/RetentionParameterEstimator.jl?branch=main)
Estimation of retention parameters for the interaction of analytes with a stationary phase in Gas Chromatography (GC).
The retention parameters are estimated from a set of temperature programmed GC runs. The GC simulation ['GasChromatographySimulator.jl'](https://github.com/GasChromatographyToolbox/GasChromatographySimulator.jl) computes retention times for candidate sets of retention parameters, and an optimization process minimizes the difference between computed and measured retention times. The retention parameters yielding this minimal difference are the final result. In addition, it is possible to estimate the column diameter _d_.
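
The typical workflow can be sketched as follows (a hedged sketch: the function names `load_chromatograms` and `check_measurement` and their keyword arguments are taken from the package's test suite; the file name is a placeholder):

```julia
using RetentionParameterEstimator

# load a set of temperature programmed measurements from a CSV file
meas = RetentionParameterEstimator.load_chromatograms("meas.csv"; filter_missing=true)

# column dimensions, with the diameter converted as in the test suite
col_input = (L = meas[1].L, d = meas[1].d*1000)

# estimate the retention parameters and check the result against the measurements
check, msg, df_flag, index_flag, res, Telu_max =
    RetentionParameterEstimator.check_measurement(meas, col_input; min_th=0.1, loss_th=1.0)
```

The result `res` then contains the estimated retention parameters (e.g. `res.Tchar`).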
## Installation
To install the package type:
```julia
julia> ] add RetentionParameterEstimator
```
To use the package type:
```julia
julia> using RetentionParameterEstimator
```
## Documentation
Please read the [documentation page](https://GasChromatographyToolbox.github.io/RetentionParameterEstimator.jl/stable/) for more information.
## Notebooks
In the folder [notebooks](https://github.com/GasChromatographyToolbox/RetentionParameterEstimator/tree/main/notebooks), notebooks using [Pluto.jl](https://github.com/fonsp/Pluto.jl) for the estimation of retention parameters from temperature programmed GC measurements are available.
To use these notebooks, [Julia, v1.6 or above,](https://julialang.org/downloads/#current_stable_release) must be installed and **Pluto** must be added:
```julia
julia> ]
(v1.7) pkg> add Pluto
```
To run Pluto, use the following commands:
```julia
julia> using Pluto
julia> Pluto.run()
```
Pluto will open in your browser. In the field `Open from file`, the URL of a notebook or the path to a locally downloaded notebook can be inserted; the notebook will then open and load the necessary packages.
### Overview of notebooks
- `estimate_retention_parameters.jl` (https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl/blob/main/notebooks/estimate_retention_parameters.jl):
- Notebook to estimate the retention parameters and optional the column diameter from measured chromatograms
- a standard error for the estimated parameters is given
- comparison of other measured chromatograms with predicted chromatograms using the found optimal parameters
- plot of the resulting retention factor ``k`` over temperature ``T``; a comparison plot with other data (e.g. isothermally measured) is possible
- by default, the notebook uses the data from the publication
## Contribution
Please open an issue if you:
- want to report a bug
- have problems using the package (please first look at the documentation)
- have ideas for new features or ways to improve the usage of this package
You can contribute (e.g. fix bugs, add new features, add to the documentation) to this package by Pull Request:
- first discuss your contributions in a new issue
- ensure that all tests pass locally before starting the pull request
- new features should be included in `runtests.jl`
- add description to the pull request, link to corresponding issues by `#` and issue number
- the pull request will be reviewed
## Citation
```
@article{Leppert2023,
author = {Leppert, Jan and Brehmer, Tillman and Wüst, Matthias and Boeker, Peter},
title = {Estimation of retention parameters from temperature programmed gas chromatography},
journal = {Journal of Chromatography A},
month = jun,
year = {2023},
volume = {1699},
pages = {464008},
issn = {00219673},
doi = {10.1016/j.chroma.2023.464008},
}
```
| RetentionParameterEstimator | https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl.git |
## Module Index
```@index
Modules = [RetentionParameterEstimator]
Order = [:constant, :type, :function, :macro]
```
## Detailed API
```@autodocs
Modules = [RetentionParameterEstimator]
Order = [:constant, :type, :function, :macro]
```
# RetentionParameterEstimator.jl
Documentation for RetentionParameterEstimator.jl
[RetentionParameterEstimator.jl](https://github.com/GasChromatographyToolbox/RetentionParameterEstimator.jl)
module SimplicialSets
using StructEqualHash, LinearCombinations
using LinearCombinations: Sign, ONE, signed, Zero, sum0, return_type, @Function, unval
using LinearCombinations: coefftype as coeff_type
import LinearCombinations: linear_filter, deg, diff, coprod, hastrait
using Base: Fix1, Fix2, @propagate_inbounds
import Base: show, ==, hash, copy, one, isone, *, /, ^, inv,
length, firstindex, lastindex, getindex, setindex!
include("abstract.jl")
include("product.jl")
include("opposite.jl")
include("interval.jl")
include("suspension.jl")
include("symbolic.jl")
include("loopgroup.jl")
include("bar.jl")
include("groups.jl")
include("twistedproduct.jl")
# include("szczarba.jl")
include("ez.jl")
include("surj.jl")
include("helpers.jl")
end
#
# Interval
#
const Interval = Union{Tuple{Integer,Integer}, Pair{<:Integer,<:Integer}, UnitRange{<:Integer}}
interval_length(k::Interval) = last(k)-first(k)+1
#
# AbstractSimplex datatype
#
export AbstractSimplex, isdegenerate
abstract type AbstractSimplex end
@linear_broadcastable AbstractSimplex
# for a new concrete subtype NewSimplex of AbstractSimplex, at least
# the following methods must be defined:
# dim(::NewSimplex)
# d(::NewSimplex, ::Int)
# s(::NewSimplex, ::Int)
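# Example (sketch): a minimal concrete subtype. `VertexListSimplex` is a
# hypothetical illustration; once `dim`, `d` and `s` are defined, the generic
# methods below (d/s on index ranges, `r`, `isdegenerate`, `diff`, `^`) all
# work automatically.
#
#     struct VertexListSimplex <: AbstractSimplex
#         v::Vector{Int}   # one label per vertex
#     end
#     dim(x::VertexListSimplex) = length(x.v) - 1
#     d(x::VertexListSimplex, k::Integer) = VertexListSimplex(deleteat!(copy(x.v), k+1))
#     s(x::VertexListSimplex, k::Integer) = VertexListSimplex(insert!(copy(x.v), k+2, x.v[k+1]))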
function d!(x::AbstractSimplex, kk::AbstractVector{<:Integer})
for k in reverse(kk)
d!(x, k)
end
x
end
@generated function d(x::T, kk) where T <: AbstractSimplex
if hasmethod(d!, (T, Int))
:(d!(copy(x), kk))
else
:(foldl(d, reverse(kk), init = x))
end
end
function s!(x::AbstractSimplex, kk::AbstractVector{<:Integer})
for k in kk
s!(x, k)
end
x
end
@generated function s(x::T, kk) where T <: AbstractSimplex
if hasmethod(s!, (T, Int))
:(s!(copy(x), kk))
else
quote
for k in kk
x = s(x, k)
end
x
end
end
end
#=
function r!(x::AbstractSimplex, kk)
l = dim(x)
for k in reverse(kk)
if !(0 <= k <= l)
error("index outside the allowed range 0:$l")
end
d!(x, k+1:l)
l = k-1
end
d!(x, 0:l)
end
=#
function r(x::T, kk) where T <: AbstractSimplex
# kk is a tuple/vector/iterator of Tuple{Integer,Integer}
isempty(kk) && error("at least one interval must be given")
kk[end] isa Interval || error("second argument must contain integer tuples, pairs or unit ranges")
y = x
for k in dim(x):-1:last(kk[end])+1
y = d(y, k)
end
for i in length(kk)-1:-1:1
kk[i] isa Interval || error("second argument must contain integer tuples, pairs or unit ranges")
kki1 = first(kk[i+1])
kki = last(kk[i])
if kki == kki1
y = s(y, kki)
else
for k in kki1-1:-1:kki+1
y = d(y, k)
end
end
end
for k in first(kk[1])-1:-1:0
y = d(y, k)
end
y
end
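# Example (sketch): `r` restricts a simplex to a union of vertex intervals;
# consecutive intervals sharing their boundary vertex contribute a degeneracy.
# Two common special cases for an n-simplex x:
#
#     r(x, ((0, k),))   # front face spanned by vertices 0..k
#     r(x, ((k, n),))   # back face spanned by vertices k..n
#
# These are the building blocks of the Alexander-Whitney map in ez.jl.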
function isdegenerate(x::AbstractSimplex, k::Integer)
@boundscheck if k < 0 || k >= dim(x)
error("index outside the allowed range 0:$(dim(x)-1)")
end
@inbounds x == s(d(x, k), k)
end
@propagate_inbounds isdegenerate(x::AbstractSimplex, k0::Integer, k1::Integer) = any(k -> isdegenerate(x, k), k0:k1-1)
isdegenerate(x::AbstractSimplex) = @inbounds isdegenerate(x, 0, dim(x))
linear_filter(x::AbstractSimplex) = !isdegenerate(x)
deg(x::AbstractSimplex) = dim(x)
@linear_kw function diff(x::T;
coefftype = Int,
addto = zero(Linear{T,unval(coefftype)}),
coeff = ONE,
is_filtered = false) where T <: AbstractSimplex
if iszero(coeff)
return addto
end
n = dim(x)
if n != 0
for k in 0:n
addmul!(addto, d(x, k), signed(k, coeff))
end
end
addto
end
function ^(g::AbstractSimplex, n::Integer)
if n >= 32
# square-and-multiply
s = g
m = one(g)
while n != 0
if isodd(n)
m *= s
end
s *= s
n >>= 1
end
m
elseif n > 0
*(ntuple(Returns(g), n)...)
elseif n == 0
one(g)
else
inv(g)^(-n)
end
end
#
# simplicial bar construction
#
export BarSimplex
import Base: *, /, ^, one, isone, inv
struct BarSimplex{T} <: AbstractSimplex
g::Vector{T}
end
function BarSimplex(g::Vector{T}) where T <: AbstractSimplex
@boundscheck all(enumerate(g)) do (i, x)
dim(x) == i-1
end || error("simplices have incorrect dimensions", g)
BarSimplex{T}(g)
end
@propagate_inbounds function BarSimplex(iter; op::Union{typeof(*),typeof(+)} = *)
if op isa typeof(+)
BarSimplex(AddToMul.(iter))
# todo: this leads to additional allocations
else
BarSimplex(collect(iter))
end
end
show(io::IO, x::BarSimplex) = print(io, '[', join(x.g, ','), ']')
@struct_equal_hash BarSimplex{T} where T
copy(x::BarSimplex{T}) where T = BarSimplex{T}(copy(x.g))
length(x::BarSimplex) = length(x.g)
iterate(x::BarSimplex, state...) = iterate(x.g, state...)
dim(x::BarSimplex) = length(x)
# note: multiplication of bar simplices only makes sense for commutative groups
one(x::BarSimplex, n::Integer = dim(x)) = one(typeof(x), n)
isone(x::BarSimplex) = all(isone, x.g)
inv(x::BarSimplex{T}) where T = BarSimplex(inv.(x.g))
function *(x::BarSimplex{T}, ys::BarSimplex{T}...) where T
@boundscheck all(==(dim(x)) ∘ dim, ys) || error("illegal arguments")
BarSimplex(.*(x.g, map(y -> y.g, ys)...))
end
function ^(x::BarSimplex, n::Integer)
BarSimplex(x.g .^ n)
end
function /(x::BarSimplex{T}, y::BarSimplex{T}) where T
@boundscheck dim(x) == dim(y) || error("illegal arguments")
BarSimplex(x.g ./ y.g)
# BarSimplex(map(splat(/), zip(x.g, y.g)))
end
#
# bar construction for discrete groups
#
function d(x::BarSimplex{T}, k::Integer) where T
@boundscheck if k < 0 || k > (n = dim(x)) || n == 0
error("illegal arguments")
end
g = x.g
@inbounds gg = T[if i < k
g[i]
elseif i == k
g[i]*g[i+1]
else
g[i+1]
end
for i in 1:length(g)-1]
BarSimplex(gg)
end
function s(x::BarSimplex{T}, k::Integer) where T
@boundscheck if k < 0 || k > dim(x)
error("illegal arguments")
end
g = x.g
k += 1
@inbounds gg = T[if i < k
g[i]
elseif i == k
one(T)
else
g[i-1]
end
for i in 1:length(g)+1]
BarSimplex(gg)
end
@inline function isdegenerate(x::BarSimplex, k::Integer)
@boundscheck if k < 0 || k >= dim(x)
error("illegal arguments")
end
@inbounds isone(x.g[k+1])
end
function one(::Type{BarSimplex{T}}, n::Integer = 0) where T
n >= 0 || error("dimension must be non-negative")
BarSimplex(T[one(T) for k in 0:n-1])
end
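# Example (sketch): the bar construction of the additive group of integers,
# written multiplicatively via `op = +` (entries get wrapped in `AddToMul`):
#
#     x = BarSimplex([1, 2]; op = +)   # a 2-simplex [1,2] of the model of BZ
#     d(x, 1)                          # [3]: the inner face adds adjacent entries
#     d(x, 0), d(x, 2)                 # [2] and [1]: the outer faces drop an entry
#     s(x, 0)                          # [0,1,2]: degeneracies insert the identity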
#
# bar construction for simplicial groups
#
export twf_bar
function d(x::BarSimplex{T}, k::Integer) where T <: AbstractSimplex
@boundscheck if k < 0 || k > (n = dim(x)) || n == 0
error("illegal arguments")
end
g = x.g
@inbounds gg = T[if i < k
g[i]
elseif i == k
g[i]*d(g[i+1], 0)
else
d(g[i+1], i-k)
end
for i in 1:length(g)-1]
@inbounds BarSimplex(gg)
end
function s(x::BarSimplex{T}, k::Integer) where T <: AbstractSimplex
@boundscheck if k < 0 || k > dim(x)
error("illegal arguments")
end
g = x.g
k += 1
@inbounds gg = [if i < k
g[i]
elseif i == k
one(T, k-1)
else
s(g[i-1], i-k-1)
end
for i in 1:length(g)+1]
@inbounds BarSimplex(gg)
end
@inline function isdegenerate(x::BarSimplex{T}, k::Integer) where T <: AbstractSimplex
@boundscheck if k < 0 || k >= dim(x)
error("illegal arguments")
end
g = x.g
@inbounds isone(g[k+1]) && all(i -> isdegenerate(g[i], i-k-2), k+2:dim(x))
end
function one(::Type{BarSimplex{T}}, n::Integer = 0) where T <: AbstractSimplex
n >= 0 || error("dimension must be non-negative")
BarSimplex(T[one(T, k) for k in 0:n-1])
end
# twisting function
function twf_bar(x::BarSimplex{T}) where T <: AbstractSimplex
if deg(x) == 0
error("twisting function not defined for 0-simplices")
else
return x.g[end]
end
end
#
# Eilenberg-Zilber maps
#
export ez, aw, shih_opp, shih_eml, shih
#
# shuffle map
#
multinomial() = 1
multinomial(k1) = 1
multinomial(k1, ks...) = binomial(k1+sum(ks)::Int, k1)*multinomial(ks...)
# multinomial(k::Int...) = length(k) <= 1 ? 1 : binomial(sum(k)::Int, k[1])*multinomial(k[2:end]...)
function foreach_shuffle_simplex(f, p::Int, q::Int, x0::S, y0::T, m::Int = 0;
xv = Vector{S}(undef, q+1),
yv = Vector{T}(undef, q+1),
kv = Vector{Int}(undef, q+1),
sv = Vector{Int}(undef, q+1)) where {S <: AbstractSimplex, T <: AbstractSimplex}
i = 1
xv[1] = x0
yv[1] = y0
kv[1] = 0
sv[1] = p*q
@inbounds while i != 0
if i <= q && kv[i] < p+i
kv[i+1] = kv[i]+1
xv[i+1] = s(xv[i] , kv[i]+m)
yv[i+1] = yv[i]
sv[i+1] = sv[i]
i += 1
else
if i > q
# this means i == q+1
yy = yv[q+1]
for kk in kv[q+1]:p+q-1
yy = s(yy, kk+m)
end
f(xv[q+1], yy, sv[q+1])
end
i -= 1
if i != 0
yv[i] = s(yv[i], kv[i]+m) # TODO: this fails for SymbolicSimplex in dim p+q close to 24
kv[i] += 1
sv[i] -= q+1-i
end
end
end
nothing
end
_ez(f, addto, coeff, x) = addmul!(addto, x, coeff; is_filtered = true)
function _ez(f::F, addto, coeff, x::S, y::T, z...) where {F,S,T}
p = dim(x)
q = dim(y)
# stack for foreach_shuffle_simplex
# this is not necessary, but avoids allocations
xv = Vector{S}(undef, q+1)
yv = Vector{T}(undef, q+1)
kv = Vector{Int}(undef, q+1)
sv = Vector{Int}(undef, q+1)
# foreach_shuffle_simplex(p, q, x, y) do xx, yy, ss
foreach_shuffle_simplex(p, q, x, y; xv, yv, kv, sv) do xx, yy, ss
_ez(f, addto, signed(ss, coeff), f(xx, yy), z...)
end
end
ez_prod() = ProductSimplex(; dim = 0)
ez_prod(x) = ProductSimplex(x)
ez_prod(x, y) = ProductSimplex(x..., y)
_ez_term_type(f, P) = P
_ez_term_type(f, P, T, U...) = _ez_term_type(f, return_type(f, P, T), U...)
ez_term_type(f) = return_type(f)
ez_term_type(f, T) = return_type(f, T)
ez_term_type(f, T, U...) = _ez_term_type(f, ez_term_type(f, T), U...)
@linear_kw function ez(xs::AbstractSimplex...;
f = ez_prod,
coefftype = Int,
addto = zero(Linear{ez_term_type(f, typeof.(xs)...),unval(coefftype)}),
coeff = one(coeff_type(addto)),
sizehint = true,
is_filtered = false)
isempty(xs) && return addmul!(addto, f(), coeff; is_filtered = true)
is_filtered || all(!isdegenerate, xs) || return addto
sizehint && sizehint!(addto, length(addto) + multinomial(map(dim, xs)...))
x1, xr... = xs
_ez(f, addto, coeff, f(x1), xr...)
addto
end
ez(t::AbstractTensor; kw...) = ez(t...; kw...)
hastrait(::typeof(ez), prop::Val, ::Type{<:AbstractTensor{T}}) where T <: Tuple = hastrait(ez, prop, fieldtypes(T)...)
@multilinear ez
deg(::typeof(ez)) = Zero()
keeps_filtered(::typeof(ez), ::Type) = true
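# Example (sketch): for nondegenerate simplices x, y of dimensions p and q,
# `ez(x, y)` is the shuffle (Eilenberg-Zilber) map: a Linear combination of
# binomial(p+q, p) product simplices, one per (p,q)-shuffle, each of the form
# ±ProductSimplex(s(x, jj), s(y, ii)) with complementary index sets ii, jj
# and the sign of the shuffle. For p = q = 1 this gives the two triangles
# (s(x,1), s(y,0)) and (s(x,0), s(y,1)) with opposite signs; applying the
# Alexander-Whitney map `aw` below recovers x ⊗ y (AW ∘ EZ = id on
# normalized chains).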
#
# Alexander-Whitney map
#
_aw(addto, coeff, ::Tuple{}) = addmul!(addto, Tensor(), coeff)
function _aw(addto, coeff, x::Tuple{AbstractSimplex}, z...)
isdegenerate(x[1]) || addmul!(addto, Tensor(x[1], z...), coeff; is_filtered = true)
end
@inline function _aw(addto, coeff, x::Tuple, z...)
n = dim(x[end])
for k in 0:n
y = r(x[end], ((k, n),))
isdegenerate(y) || _aw(addto, coeff, map(Fix2(r, ((0, k),)), x[1:end-1]), y, z...)
end
end
@linear_kw function aw(x::ProductSimplex{T};
coefftype = Int,
addto = zero(Linear{Tensor{T},unval(coefftype)}),
coeff = ONE,
is_filtered = false) where T <: Tuple{Vararg{AbstractSimplex}}
if !iszero(coeff) && (is_filtered || !isdegenerate(x))
_aw(addto, coeff, components(x))
end
addto
end
@linear aw
deg(::typeof(aw)) = Zero()
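# Example (sketch): on a 2-fold product, `aw` is the Alexander-Whitney map:
# for an n-simplex (x, y) it returns the sum over k of
#
#     r(x, ((0, k),)) ⊗ r(y, ((k, n),))   # front k-face tensor back (n-k)-face
#
# as a Linear combination of Tensors, with degenerate factors filtered out.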
#
# homotopy
#
@linear_kw function shih_opp(z::ProductSimplex{Tuple{S, T}};
coefftype = Int,
addto = zero(Linear{ProductSimplex{Tuple{S,T}},unval(coefftype)}),
coeff = ONE,
sizehint = true,
is_filtered = false) where {S <: AbstractSimplex, T <: AbstractSimplex}
iszero(coeff) && return addto
n = dim(z)
sizehint && sizehint!(addto, length(addto)+(1<<(n+1))-n-2)
x, y = components(z)
# stack for foreach_shuffle_simplex
xv = Vector{S}(undef, n+1)
yv = Vector{T}(undef, n+1)
kv = Vector{Int}(undef, n+1)
sv = Vector{Int}(undef, n+1)
for p in 0:n-1, q in 0:n-1-p
xx = r(x, ((0, p), (p+q+1, n)))
yy = r(y, ((p, n),))
# if nondeg || !(isdegenerate(xx, 0, p+1) || isdegenerate(yy, 0, q+1))
if !(isdegenerate(xx, 0, p+1) || isdegenerate(yy, 0, q+1))
foreach_shuffle_simplex(p, q+1, xx, yy; xv, yv, kv, sv) do xxx, yyy, sss
@inbounds addmul!(addto, ProductSimplex(xxx, s(yyy, p+q+1); dim = n+1), signed(p+q+sss, coeff); is_filtered = true)
end
end
end
addto
end
@linear shih_opp
deg(::typeof(shih_opp)) = 1
@linear_kw function shih_eml(z::ProductSimplex{Tuple{S, T}};
coefftype = Int,
addto = zero(Linear{ProductSimplex{Tuple{S,T}},unval(coefftype)}),
coeff = ONE,
sizehint = true,
is_filtered = false) where {S <: AbstractSimplex, T <: AbstractSimplex}
iszero(coeff) && return addto
n = dim(z)
sizehint && sizehint!(addto, length(addto)+(1<<(n+1))-n-2)
x, y = components(z)
# stack for foreach_shuffle_simplex
xv = Vector{S}(undef, n)
yv = Vector{T}(undef, n)
kv = Vector{Int}(undef, n)
sv = Vector{Int}(undef, n)
for p in 0:n-1, q in 0:n-1-p
m = n-1-p-q
xx = r(x, ((0, n-q),))
yy = r(y, ((0, m), (n-q, n)))
# if nondeg || !(isdegenerate(xx, m, m+p+1) || isdegenerate(yy, m, m+q+1))
if !(isdegenerate(xx, m, m+p+1) || isdegenerate(yy, m, m+q+1))
foreach_shuffle_simplex(p+1, q, s(xx, m), yy, m+1; xv, yv, kv, sv) do xxx, yyy, sss
@inbounds addmul!(addto, ProductSimplex(xxx, yyy; dim = n+1), signed(m+sss, coeff); is_filtered = true)
end
end
end
addto
end
@linear shih_eml
deg(::typeof(shih_eml)) = 1
# setting the default version
const shih = shih_eml
#
# group operations on chains
#
import Base: *, inv
@linear inv
keeps_filtered(::typeof(inv), ::Type) = true
_mul(addto, coeff, t) = ez(Tensor(t); f = *, addto, coeff)
function _mul(addto, coeff, t, a, b...)
for (x, c) in a
_mul(addto, coeff*c, (t..., x), b...)
end
end
function *(a::AbstractLinear{<:AbstractSimplex}...)
# TODO: add addto & coeff
R = promote_type(map(coefftype, a)...)
T = return_type(*, map(termtype, a)...)
addto = zero(Linear{T,R})
l = prod(map(length, a))
l == 0 && return addto
_mul(addto, ONE, (), a...)
addto
end
#
# diagonal map and coproduct
#
export diag
diag(x::AbstractSimplex) = @inbounds ProductSimplex(x, x)
@linear diag
keeps_filtered(::typeof(diag), ::Type) = true
coprod(x::AbstractSimplex; kw...) = aw(diag(x); kw...)
#
# group stuff
#
export AddToMul, Lattice
import Base: *, /, ^, +, -, zero, iszero, one, isone
#
# AddToMul
#
struct AddToMul{T}
x::T
end
show(io::IO, a::AddToMul) = print(io, a.x)
one(::Type{AddToMul{T}}) where T = AddToMul(zero(T))
one(a::AddToMul) = AddToMul(zero(a.x))
isone(a::AddToMul) = iszero(a.x)
*(a::AddToMul{T}...) where T = AddToMul(mapreduce(b -> b.x, +, a))
/(a::AddToMul{T}, b::AddToMul{T}) where T = AddToMul(a.x-b.x)
inv(a::AddToMul) = AddToMul(-a.x)
^(a::AddToMul, n::Integer) = AddToMul(n * a.x)
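# Example (sketch): `AddToMul` presents an additive group multiplicatively,
# so additively written groups fit APIs that expect `*`, `one` and `inv`:
#
#     AddToMul(2) * AddToMul(3)   # AddToMul(5), printed as 5
#     inv(AddToMul(2))            # AddToMul(-2)
#     isone(AddToMul(0))          # true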
#
# Lattice
#
struct Lattice{N}
v::NTuple{N, Int}
end
Lattice(ii::Int...) = Lattice(ii)
show(io::IO, g::Lattice) = print(io, g.v)
length(::Lattice{N}) where N = N
iterate(g::Lattice, state...) = iterate(g.v, state...)
zero(::Type{Lattice{N}}) where N = Lattice(ntuple(Returns(0), N))
zero(::T) where T <: Lattice = zero(T)
# this doesn't seem to be faster than the default g.v == zero(g.v)
iszero(g::Lattice) = all(iszero, g.v)
+(g::Lattice{N}...) where N = Lattice(map(+, map(h -> h.v, g)...))
-(g::Lattice) = Lattice(.- g.v)
-(g::Lattice{N}, h::Lattice{N}) where N = Lattice(g.v .- h.v)
*(n::Integer, g::Lattice) = Lattice(n .* g.v)
module TestHelpers
using ..SimplicialSets
using StructEqualHash: @struct_equal_hash
using LinearCombinations
using SimplicialSets: d, s
export BasicSimplex, undo_basic
#
# BasicSimplex
#
# used to test that functions only use the basic operations dim, d, s
struct BasicSimplex{T<:AbstractSimplex} <: AbstractSimplex
x::T
end
# Base.:(==)(y::BasicSimplex, z::BasicSimplex) = y.x == z.x
# Base.hash(y::BasicSimplex, h::UInt) = hash(y.x, h)
@struct_equal_hash BasicSimplex{T} where T
Base.copy(y::BasicSimplex) = BasicSimplex(copy(y.x))
SimplicialSets.dim(y::BasicSimplex) = dim(y.x)
SimplicialSets.d(y::BasicSimplex, k::Integer) = BasicSimplex(d(y.x, k))
SimplicialSets.s(y::BasicSimplex, k::Integer) = BasicSimplex(s(y.x, k))
Base.:*(ys::BasicSimplex...) = BasicSimplex(*((y.x for y in ys)...))
Base.:/(y::BasicSimplex, z::BasicSimplex) = BasicSimplex(y.x/z.x)
Base.inv(y::BasicSimplex) = BasicSimplex(inv(y.x))
Base.one(::Type{BasicSimplex{T}}, n...) where T = BasicSimplex(one(T, n...))
Base.one(y::BasicSimplex, n...) = BasicSimplex(one(y.x, n...))
undo_basic(x::AbstractSimplex) = x
undo_basic(y::BasicSimplex) = y.x
undo_basic(x::ProductSimplex) = ProductSimplex((undo_basic(y) for y in x)...)
undo_basic(x::AbstractTensor) = Tensor((undo_basic(y) for y in x)...)
# undo_basic(a::Linear) = Linear(undo_basic(x) => c for (x, c) in a)
@linear undo_basic
end # module TestHelpers
#
# simplicial interval
#
export IntervalSimplex
struct IntervalSimplex <: AbstractSimplex
p::Int
q::Int
function IntervalSimplex(p::Int, q::Int)
if p >= 0 && q >= 0 && p+q != 0
new(p, q)
else
error("illegal arguments")
end
end
end
IntervalSimplex() = IntervalSimplex(1,1)
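# Example (sketch, interpretation): `IntervalSimplex(p, q)` is the
# (p+q-1)-simplex of the simplicial interval whose first p vertices lie over
# the endpoint 0 and whose last q vertices lie over 1. `IntervalSimplex(1, 1)`
# is the unique nondegenerate 1-simplex; face maps shrink one of the blocks:
#
#     x = IntervalSimplex(2, 1)   # a degenerate 2-simplex
#     d(x, 0)                     # IntervalSimplex(1, 1)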
@struct_equal_hash IntervalSimplex
dim(x::IntervalSimplex) = x.p + x.q - 1
function d(x::IntervalSimplex, k::Int)
p, q = x.p, x.q
if 0 <= k < p
IntervalSimplex(p-1, q)
elseif 0 <= k < p+q
IntervalSimplex(p, q-1)
else
error("illegal arguments")
end
end
function s(x::IntervalSimplex, k::Int)
p, q = x.p, x.q
if 0 <= k < p
IntervalSimplex(p+1, q)
elseif 0 <= k < p+q
IntervalSimplex(p, q+1)
else
error("illegal arguments")
end
end
isdegenerate(x::IntervalSimplex, k::Integer) = k != x.p-1
isdegenerate(x::IntervalSimplex) = x.p > 1 || x.q > 1
# not exported at the moment
isabovevertex(x::IntervalSimplex) = x.p == 0 || x.q == 0
#
# loop group
#
import Base: one, isone, inv, *, /, ^
# LoopGroupGenerator
struct LoopGroupGenerator{T <: AbstractSimplex}
gen::T
inv::Bool
end
function show(io::IO, u::LoopGroupGenerator)
show(io, u.gen)
if u.inv
# print(io, "^{-1}")
print(io, "⁻¹")
end
end
@struct_equal_hash LoopGroupGenerator{T} where T <: AbstractSimplex
dim(u::LoopGroupGenerator) = dim(u.gen)-1
@propagate_inbounds d(u::LoopGroupGenerator, k) = LoopGroupGenerator(d(u.gen, k), u.inv)
@propagate_inbounds s(u::LoopGroupGenerator, k) = LoopGroupGenerator(s(u.gen, k), u.inv)
@propagate_inbounds isdegenerate(u::LoopGroupGenerator, k) = isdegenerate(u.gen, k)
inv(u::LoopGroupGenerator) = LoopGroupGenerator(u.gen, !u.inv)
areinverse(u::LoopGroupGenerator{T}, v::LoopGroupGenerator{T}) where T = u.gen == v.gen && u.inv != v.inv
# LoopGroupSimplex
export LoopGroupSimplex, twf_loop
struct LoopGroupSimplex{T<:AbstractSimplex} <: AbstractSimplex
gens::Vector{LoopGroupGenerator{T}}
dim::Int
end
function LoopGroupSimplex(x::T) where T <: AbstractSimplex
n = dim(x)
if n == 0
error("illegal argument")
end
if isdegenerate(x, 0)
LoopGroupSimplex(LoopGroupGenerator{T}[], n-1)
else
LoopGroupSimplex([LoopGroupGenerator(x, false)], n-1)
end
end
function show(io::IO, g::LoopGroupSimplex)
print(io, '⟨')
join(io, g.gens, ',')
print(io, '⟩')
end
dim(g::LoopGroupSimplex) = g.dim
copy(g::LoopGroupSimplex) = LoopGroupSimplex(copy(g.gens), g.dim)
length(g::LoopGroupSimplex) = length(g.gens)
iterate(g::LoopGroupSimplex, state...) = iterate(g.gens, state...)
function one(::Type{LoopGroupSimplex{T}}, n::Integer = 0) where T <: AbstractSimplex
n >= 0 || error("dimension must be non-negative")
LoopGroupSimplex{T}(LoopGroupGenerator{T}[], n)
# we need the parameter in "LoopGroupSimplex{T}" to get the automatic conversion of n to Int
end
one(g::T, n = dim(g)) where T <: LoopGroupSimplex = one(T, n)
isone(g::LoopGroupSimplex) = length(g) == 0
@struct_equal_hash LoopGroupSimplex{T} where T <: AbstractSimplex
function append!(g::LoopGroupSimplex{T}, u::LoopGroupGenerator{T}, ::Val{nondeg0}) where {T, nondeg0}
gens = g.gens
@boundscheck if dim(g) != dim(u)
error("illegal arguments")
end
@inbounds if nondeg0 || !isdegenerate(u, 0)
if isempty(gens) || !areinverse(gens[end], u)
push!(gens, u)
else
pop!(gens)
end
end
g
end
function d(g::T, k::Integer) where T <: LoopGroupSimplex
n = dim(g)
m = n == 0 ? -1 : n
0 <= k <= m || error("index outside the allowed range 0:$m")
h = one(T, dim(g)-1)
sizehint!(h.gens, (k == 0 ? 2 : 1) * length(g))
@inbounds for u in g.gens
v = d(u, k+1)
if k != 0
append!(h, v, Val(false))
else
w = inv(d(u, 0))
append!(h, u.inv ? v : w, Val(false))
append!(h, u.inv ? w : v, Val(false))
end
end
h
end
function s(g::LoopGroupSimplex, k::Integer)
n = dim(g)
0 <= k <= n || error("index outside the allowed range 0:$n")
LoopGroupSimplex(map(u -> @inbounds(s(u, k+1)), g.gens), n+1)
end
function inv(g::LoopGroupSimplex)
gens = similar(g.gens)
for (k, u) in enumerate(g.gens)
@inbounds gens[end+1-k] = inv(u)
end
LoopGroupSimplex(gens, g.dim)
end
function mul!(g::LoopGroupSimplex{T}, hs::LoopGroupSimplex{T}...) where T <: AbstractSimplex
all(==(dim(g)) ∘ dim, hs) || error("illegal arguments")
sizehint!(g.gens, length(g)+sum(length, hs; init = 0))
for h in hs, v in h.gens
append!(g, v, Val(true))
end
g
end
*(g::LoopGroupSimplex{T}, hs::LoopGroupSimplex{T}...) where T <: AbstractSimplex = mul!(copy(g), hs...)
# twisting function
twf_loop(x::AbstractSimplex) = LoopGroupSimplex(x)
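# Example (sketch): for a simplex x of dimension n >= 1, the twisting function
# sends x to the generator g = twf_loop(x), an (n-1)-simplex of the Kan loop
# group. When the faces involved are nondegenerate (degenerate faces give the
# identity), the simplicial operators satisfy
#
#     d(g, k) == twf_loop(d(x, k+1))                          # for k >= 1
#     d(g, 0) == inv(twf_loop(d(x, 0))) * twf_loop(d(x, 1))
#     s(g, k) == twf_loop(s(x, k+1))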
#
# OppositeSimplex datatype
#
export OppositeSimplex, opposite
struct OppositeSimplex{T<:AbstractSimplex} <: AbstractSimplex
x::T
end
function show(io::IO, y::OppositeSimplex)
print(io, "OppositeSimplex(", repr(y.x), ')')
end
@struct_equal_hash OppositeSimplex{T} where T
copy(y::OppositeSimplex) = OppositeSimplex(copy(y.x))
dim(y::OppositeSimplex) = dim(y.x)
d(y::OppositeSimplex, k::Integer) = OppositeSimplex(d(y.x, dim(y)-k))
s(y::OppositeSimplex, k::Integer) = OppositeSimplex(s(y.x, dim(y)-k))
*(ys::OppositeSimplex...) = OppositeSimplex(*((y.x for y in ys)...))
/(y::OppositeSimplex, z::OppositeSimplex) = OppositeSimplex(y.x/z.x)
inv(y::OppositeSimplex) = OppositeSimplex(inv(y.x))
one(::Type{OppositeSimplex{T}}, n...) where T = OppositeSimplex(one(T, n...))
one(y::OppositeSimplex, n...) = OppositeSimplex(one(y.x, n...))
opposite(x::AbstractSimplex) = OppositeSimplex(x)
opposite(y::OppositeSimplex) = y.x
opposite(x::ProductSimplex) = ProductSimplex((opposite(y) for y in x)...)
opposite(x::AbstractTensor) = Tensor((opposite(y) for y in x)...)
q2(x::AbstractSimplex) = ifelse((dim(x)+1) & 2 == 0, 0, 1)
# this is 0 if dim(x) == 0 or 3 mod 4, and 1 if dim(x) == 1 or 2 mod 4
q2(t::AbstractTensor) = sum0(map(q2, Tuple(t)))
@linear_kw function opposite(a::AbstractLinear{T,R};
coefftype = R,
addto = zero(Linear{return_type(opposite, T),unval(coefftype)}),
coeff = ONE) where {T,R}
iszero(coeff) && return addto
for (x, c) in a
addmul!(addto, opposite(x), coeff*signed(q2(x), c); is_filtered = true)
end
addto
end
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
# ProductSimplex datatype
#
export ProductSimplex, components
using Base: @__MODULE__ as @MODULE
import Base: length, iterate, convert
# struct ProductSimplex{T<:Tuple{Vararg{AbstractSimplex}}}
struct ProductSimplex{T<:Tuple} <: AbstractSimplex
xl::T
dim::Int
# we need `@propagate_inbounds` instead of `@inline`, see julia/#30411
@propagate_inbounds ProductSimplex{T}(xl::T; dim::Union{Integer,Missing} = missing) where T<:Tuple{Vararg{AbstractSimplex}} = begin
if dim === missing
if isempty(xl)
error("use 'dim' to specify the dimension of an empty product simplex")
else
dim = (@MODULE).dim(xl[1])
end
end
@boundscheck begin
dim >= 0 || error("dimension must be non-negative")
all(==(dim) ∘ (@MODULE).dim, xl) || error("dimensions of simplices do not match")
end
new{T}(xl, dim)
end
end
@propagate_inbounds ProductSimplex(xl::T; dim::Union{Integer,Missing} = missing) where T <: Tuple =
ProductSimplex{T}(xl; dim)
@propagate_inbounds ProductSimplex(x::AbstractSimplex...; kw...) = ProductSimplex(x; kw...)
function show(io::IO, x::ProductSimplex)
print(io, '(', join(map(repr, components(x)), ','), ')')
end
components(x::ProductSimplex) = x.xl
length(x::ProductSimplex) = length(components(x))
firstindex(x::ProductSimplex) = 1
lastindex(x::ProductSimplex) = length(x)
iterate(x::ProductSimplex, state...) = iterate(components(x), state...)
@propagate_inbounds getindex(x::ProductSimplex, k) = components(x)[k]
# copy(x::ProductSimplex) = ProductSimplex(copy(x.xl))
copy(x::ProductSimplex) = x
convert(::Type{P}, x::ProductSimplex) where P <: ProductSimplex = @inbounds P(components(x); dim = dim(x))
@struct_equal_hash ProductSimplex{T} where T
# @struct_equal_hash ProductSimplex
# TODO: should we take the tuple type T into account?
dim(x::ProductSimplex) = x.dim
@propagate_inbounds function d(x::ProductSimplex, k::Integer)
n = dim(x)
@boundscheck begin
m = n == 0 ? -1 : n
0 <= k <= m || error("index outside the allowed range 0:$m")
end
ProductSimplex(map(y -> @inbounds(d(y, k)), x.xl); dim = n-1)
end
@propagate_inbounds function s(x::ProductSimplex, k::Integer)
n = dim(x)
@boundscheck begin
0 <= k <= n || error("index outside the allowed range 0:$n")
end
ProductSimplex(map(y -> @inbounds(s(y, k)), x.xl); dim = n+1)
end
@propagate_inbounds function r(x::ProductSimplex, kk)
ProductSimplex(map(y -> r(y, kk), x.xl))
end
@propagate_inbounds function r(x::ProductSimplex{Tuple{}}, kk)
isempty(kk) && error("at least one interval must be given")
ProductSimplex((); dim = sum(map(interval_length, kk))-1)
end
@inline function isdegenerate(x::ProductSimplex, k::Integer)
@boundscheck if k < 0 || k >= dim(x)
error("index outside the allowed range 0:$(dim(x))")
end
@inbounds all(y -> isdegenerate(y, k), components(x))
end
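# Example (sketch): simplices of the same dimension are paired componentwise,
# and all simplicial operators act diagonally:
#
#     x = IntervalSimplex(2, 1)   # a 2-simplex of the interval (see interval.jl)
#     z = ProductSimplex(x, x)    # a 2-simplex of the square
#     d(z, 1)                     # (d(x,1), d(x,1))
#     isdegenerate(z, 0)          # true at k exactly when every component is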
# concatenating and flattening ProductSimplex
using LinearCombinations: _cat
import LinearCombinations: cat, flatten, _flatten
cat(x::ProductSimplex...) = ProductSimplex(_cat(x...); dim = dim(x[1]))
_flatten(x::ProductSimplex) = _cat(map(_flatten, components(x))...)
flatten(x::ProductSimplex) = ProductSimplex(_flatten(x); dim = dim(x))
#
# regrouping
#
using LinearCombinations: regroup_check_arg, regroup_eval_expr
import LinearCombinations: _length, _getindex
_length(::Type{<:ProductSimplex{T}}) where T <: Tuple = _length(T)
@propagate_inbounds _getindex(::Type{T}, i) where T <: ProductSimplex = _getindex(T.parameters[1], i)
function (rg::Regroup{A})(x::T) where {A,T<:ProductSimplex}
regroup_check_arg(ProductSimplex, typeof(A), T) ||
error("argument type $(typeof(x)) does not match first Regroup parameter $A")
@inbounds regroup_eval_expr(rg, _getindex, ProductSimplex, x)
end
#
# surjections and interval cut operations
#
export Surjection, arity
struct Surjection{K} # K is the number of labels
u::Vector{Int} # the surjection proper
# u1::Vector{Int} # previous interval with same label (0 for first occurrence)
f::Vector{Bool} # final intervals
v::NTuple{K,Vector{Int}} # intervals for each i in 1:K
end
show(io::IO, surj::Surjection{K}) where K = print(io, "Surjection{$K}($(repr(surj.u)))")
==(surj1::Surjection, surj2::Surjection) = surj1.u == surj2.u
hash(surj::Surjection, h::UInt) = hash(surj.u, h)
deg(surj::Surjection{K}) where K = length(surj.u)-K
arity(surj::Surjection{K}) where K = K
is_surjection(k, u::AbstractVector{Int}) = extrema(u; init = (1, 0)) == (1, k) && all(Fix2(in, u), 2:k-1)
isdegenerate_surjection(u) = any(i -> @inbounds(u[i-1] == u[i]), 2:length(u))
isdegenerate(surj::Surjection) = isdegenerate_surjection(surj.u)
linear_filter(surj::Surjection) = !isdegenerate(surj)
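# The two checks above are easy to state on a plain vector. A standalone
# sketch with hypothetical helper names (not part of the package API):

```julia
# A map u : {1,…,l} → {1,…,k} is a surjection iff every label in 1:k occurs;
# the associated operation is degenerate iff two equal labels are adjacent.
issurj(k, u) = all(i -> i in u, 1:k)
isdegen(u) = any(i -> u[i-1] == u[i], 2:length(u))

@assert issurj(3, [1, 3, 2, 1])   # onto 1:3
@assert !issurj(4, [1, 3, 2, 1])  # the label 4 never occurs
@assert isdegen([1, 2, 2, 3])     # adjacent repetition
@assert !isdegen([1, 2, 1, 3])    # repeated labels, but never adjacent
```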
# TODO: do we need to allow empty u? yes
function Surjection{k}(u; check = true) where k
!check || is_surjection(k, u) || error("argument is not a surjection onto 1:$k")
l = length(u)
v = ntuple(_ -> Int[], k) # a fresh vector for each label; Returns(Int[]) would make all entries alias one vector
for i in 1:l
push!(v[u[i]], i)
end
# determine final intervals
f = fill(false, l)
for i in 1:k
f[v[i][end]] = true
end
Surjection{k}(u, f, v)
end
Surjection(u) = Surjection{maximum(u; init = 0)}(u)
@linear_kw function diff(surj::Surjection{k};
coefftype = Int,
addto = zero(Linear{Surjection{k},unval(coefftype)}),
coeff = ONE,
is_filtered = false) where k
(; u, f, v) = surj
s = 0
l = length(u)
us = Vector{Int}(undef, l)
@inbounds for i in 1:l
if !f[i]
si = s
s += 1
else
vi = v[u[i]]
length(vi) == 1 && continue
si = us[vi[end-1]] + 1
end
us[i] = si
if i == 1 || i == l || u[i-1] != u[i+1]
# this means that w below is non-degenerate
w = deleteat!(copy(u), i)
addmul!(addto, Surjection{k}(w; check = false), signed(si, coeff); is_filtered = true)
end
end
addto
end
# interval cuts
struct IC
n::Int
k::Int
end
length(a::IC) = binomial(a.n+a.k-1, a.k-1)
function iterate(a::IC)
ic = zeros(Int, a.k+1)
@inbounds ic[a.k+1] = a.n
ic, ic
end
@inline function iterate(a::IC, ic)
i = a.k
while i > 1 && @inbounds ic[i] == a.n
i -= 1
end
if i == 1
nothing
else
@inbounds m = ic[i]+1
for j in i:a.k
@inbounds ic[j] = m
end
ic, ic
end
end
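# The iterator above walks, through its vectors of cut points, all ways of
# splitting the interval [0, n] into k ordered pieces. A standalone recursive
# sketch (illustrative names, not the package API) with the same
# stars-and-bars count:

```julia
# Enumerate all cut-point vectors 0 = c₁ ≤ c₂ ≤ … ≤ cₖ ≤ c_{k+1} = n.
function all_cuts(n::Int, k::Int)
    cuts = Vector{Vector{Int}}()
    function go(prefix)
        if length(prefix) == k
            push!(cuts, vcat(prefix, n))   # close with the final cut point n
        else
            for c in prefix[end]:n         # next cut point ≥ the previous one
                go(vcat(prefix, c))
            end
        end
    end
    go([0])
    return cuts
end

@assert length(all_cuts(3, 2)) == binomial(3 + 2 - 1, 2 - 1)  # 4 ways
@assert all(c -> issorted(c) && first(c) == 0 && last(c) == 2, all_cuts(2, 3))
```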
@propagate_inbounds function isdegenerate_ic(v, ic)
for pp in v
i = -1
for p in pp
if p != 1 && ic[p] == i
return true
end
i = ic[p+1]
end
end
false
end
@linear_kw function (surj::Surjection{k})(x::T;
coefftype = Int,
addto = zero(Linear{Tensor{NTuple{k,T}},unval(coefftype)}),
coeff = one(coeff_type(addto)),
is_filtered = false) where {k, T <: AbstractSimplex}
iszero(coeff) && return addto
if k == 0
# in this case the interval cut operation is the augmentation map
deg(x) == 0 && addmul!(addto, Tensor(), coeff)
return addto
end
u = surj.u
f = surj.f
v = surj.v
l = length(u)
a = IC(dim(x), l)
sizehint!(addto, length(addto)+length(a))
rr = ntuple(i -> Vector{Tuple{Int,Int}}(undef, length(v[i])), k)
L = Vector{Int}(undef, l)
@inbounds for ic in a
isdegenerate_ic(v, ic) && continue
for i in 1:k, j in 1:length(v[i])
rr[i][j] = (ic[v[i][j]], ic[v[i][j]+1])
end
xl = map(Fix1(r, x), rr)
any(isdegenerate, xl) && continue
# TODO: incorporate non-deg testing into iterate
# compute permutation sign
s = sum(@inbounds ifelse(f[i], 0, ic[i+1]) for i in 1:l)
for i in 1:l
L[i] = Li = ic[i+1] - ic[i] + ifelse(f[i], 0, 1)
ui = u[i]
for j in 1:i-1
if ui < u[j]
s += Li*L[j]
end
end
end
addmul!(addto, Tensor(xl), signed(s, coeff); is_filtered = true)
end
addto
end
@linear surj::Surjection
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
|
[
"MIT"
] | 0.2.0 | 9c7c79fd1a7bfe329908fd80360bb4fb9c4a07a2 | code | 2085 | #
# simplicial suspension
#
export SuspensionSimplex
struct SuspensionSimplex{T <: AbstractSimplex} <: AbstractSimplex
i::IntervalSimplex
x::T
function SuspensionSimplex{T}(i::IntervalSimplex) where T <: AbstractSimplex
isabovevertex(i) ? new{T}(i) : error("the interval simplex must lie above a vertex")
end
function SuspensionSimplex(x::T, i::IntervalSimplex) where T <: AbstractSimplex
if dim(x) == dim(i)
isabovevertex(i) ? new{T}(i) : new{T}(i, x)
else
error("simplices must be of the same dimension")
end
end
end
SuspensionSimplex(x::T, p::Int, q::Int = dim(x)+1-p) where T <: AbstractSimplex = SuspensionSimplex(x, IntervalSimplex(p, q))
SuspensionSimplex(x::ProductSimplex{Tuple{T, IntervalSimplex}}) where T <: AbstractSimplex = SuspensionSimplex(components(x)...)
function show(io::IO, x::SuspensionSimplex)
print(io, isabovevertex(x) ? "Σ($(x.i))" : "Σ($(x.x),$(x.i))")
end
dim(x::SuspensionSimplex) = dim(x.i)
isabovevertex(x::SuspensionSimplex) = isabovevertex(x.i)
function ==(x::SuspensionSimplex{T}, y::SuspensionSimplex{T}) where T <: AbstractSimplex
x.i == y.i && (isabovevertex(x) || x.x == y.x)
end
function hash(x::SuspensionSimplex, h::UInt)
h = hash(x.i, h)
if isabovevertex(x)
h
else
hash(x.x, h)
end
end
function d(x::SuspensionSimplex{T}, k::Integer) where T <: AbstractSimplex
if isabovevertex(x)
SuspensionSimplex{T}(d(x.i, k))
else
SuspensionSimplex(d(x.x, k), d(x.i, k))
end
end
function s(x::SuspensionSimplex{T}, k::Integer) where T <: AbstractSimplex
if isabovevertex(x)
SuspensionSimplex{T}(s(x.i, k))
else
SuspensionSimplex(s(x.x, k), s(x.i, k))
end
end
function isdegenerate(x::SuspensionSimplex, k::Integer)
if isabovevertex(x)
dim(x) > 0
else
isdegenerate(x.i, k) && isdegenerate(x.x, k)
end
end
function isdegenerate(x::SuspensionSimplex)
if isabovevertex(x)
dim(x) > 0
else
any(k -> isdegenerate(x.i, k) && isdegenerate(x.x, k), 0:dim(x)-1)
end
end
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
|
[
"MIT"
] | 0.2.0 | 9c7c79fd1a7bfe329908fd80360bb4fb9c4a07a2 | code | 3862 | #
# SymbolicSimplex datatype
#
export SymbolicSimplex, dim, vertices
# implementation using UInt128
const Label = Union{Symbol,Char}
struct SymbolicSimplex{L<:Label} <: AbstractSimplex
label::L
dim::Int
v::UInt128
function SymbolicSimplex(label::L, dim, v) where L <: Label
@boundscheck if dim > 24
error("SymbolicSimplex is limited to dimension at most 24")
end
new{L}(label, dim, v)
end
end
function SymbolicSimplex(label::Label, w::AbstractVector{<:Integer})
v = UInt128(0)
for k in reverse(w)
0 <= k < 32 || error("vertex numbers must be between 0 and 31")
v = 32*v+k
end
SymbolicSimplex(label, length(w)-1, v)
end
SymbolicSimplex(label::Label, n::Integer) = SymbolicSimplex(label, 0:n)
dim(x::SymbolicSimplex) = x.dim
# copy(x::SymbolicSimplex) = SymbolicSimplex(x.label, x.dim, x.v)
copy(x::SymbolicSimplex) = x
to_uint(x::Symbol) = objectid(x)
to_uint(x::Char) = UInt(1073741831)*UInt(x)
# hash(x::SymbolicSimplex, h::UInt) = hash(x.v, hash(256*x.dim+Int(x.label), h))
function Base.hash(x::SymbolicSimplex, h::UInt)
m = UInt(1073741827)*(x.dim % UInt) + to_uint(x.label)
# the constants are primes with product 0x1000000280000015
# which is greater than the 60 = 12*5 bits needed for 12 vertices
if x.dim <= 12
hash(x.v % UInt + m, h)
else
hash(x.v + m, h)
end
end
function Base.:(==)(x::SymbolicSimplex, y::SymbolicSimplex)
# x.label == y.label &&
x.dim == y.dim && x.v == y.v
end
function vertices(x::SymbolicSimplex)
d = x.v
m = dim(x)+1
s = Vector{Int}(undef, m)
for k in 1:m
d, r = divrem(d, UInt128(1) << 5)
s[k] = r
end
s
end
function show(io::IO, x::SymbolicSimplex)
print(io, x.label, '[')
join(io, vertices(x), ',')
print(io, ']')
end
const bitmask = [[UInt128(0)]; [UInt128(1) << (5*k) - UInt128(1) for k in 1:25]]
function d(x::SymbolicSimplex, k::Integer)
n = dim(x)
@boundscheck if k < 0 || k > n || n == 0
error("index outside the allowed range 0:$(n == 0 ? -1 : n)")
end
@inbounds w = (x.v & bitmask[k+1]) | (x.v & ~bitmask[k+2]) >> 5
@inbounds SymbolicSimplex(x.label, n-1, w)
end
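# A standalone sketch of the 5-bit vertex packing used above (illustrative
# names): vertices in 0:31 are stored little-endian in a UInt128, and the
# k-th face operator just splices out the k-th 5-bit field.

```julia
pack(vs) = foldr((v, acc) -> (acc << 5) | UInt128(v), vs; init = UInt128(0))
unpack(w, m) = [Int((w >> (5k)) & 0x1f) for k in 0:m-1]

mask(k) = UInt128(1) << (5k) - 1                     # the k lowest fields
face(w, k) = (w & mask(k)) | ((w >> 5) & ~mask(k))   # drop vertex number k

w = pack([0, 1, 2, 3])
@assert unpack(w, 4) == [0, 1, 2, 3]
@assert unpack(face(w, 2), 3) == [0, 1, 3]   # the 2nd face of [0,1,2,3]
@assert unpack(face(w, 0), 3) == [1, 2, 3]
```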
# missing: d(x, kk) for kk a collection
function r(x::SymbolicSimplex, kk)
isempty(kk) && error("at least one interval must be given")
n = zero(first(kk[1]))
w = x.v & bitmask[last(kk[end])+2]
for i in length(kk)-1:-1:1
ka = last(kk[i])
kb = first(kk[i+1])
w = (w & bitmask[ka+2]) | ((w & ~bitmask[kb+1]) >> (5*(kb-ka-1)))
n += last(kk[i+1])-kb+1
end
w >>= 5*first(kk[1])
n += last(kk[1])-first(kk[1])
SymbolicSimplex(x.label, n, w)
end
function s(x::SymbolicSimplex, k::Integer)
n = dim(x)
if n == 24
error("SymbolicSimplex is limited to dimension at most 24")
end
@boundscheck if k < 0 || k > n
error("index outside the allowed range 0:$n")
end
@inbounds w = (x.v & bitmask[k+2]) | (x.v & ~bitmask[k+1]) << 5
@inbounds SymbolicSimplex(x.label, n+1, w)
end
function s(x::SymbolicSimplex, kk::AbstractVector{<:Integer})
w = UInt128(0)
v = x.v
l = length(kk)
n = dim(x)
if n+l > 24
error("SymbolicSimplex is limited to dimension at most 24")
end
for i in 1:l
@inbounds k = kk[i]+1
@boundscheck if k > n+i
error("indices outside the allowed range")
end
@inbounds w |= v & bitmask[k+1]
@inbounds v = (v & ~bitmask[k]) << 5
end
@inbounds SymbolicSimplex(x.label, n+l, w | v)
end
@inline function isdegenerate(x::SymbolicSimplex, k::Integer)
@boundscheck if k < 0 || k >= dim(x)
error("index outside the allowed range 0:$(dim(x)-1)")
end
@inbounds xor(x.v, x.v >> 5) & bitmask[k+2] & ~bitmask[k+1] == 0
end
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
|
[
"MIT"
] | 0.2.0 | 9c7c79fd1a7bfe329908fd80360bb4fb9c4a07a2 | code | 1395 | #
# Szczarba operators
#
export szczarba
szczarba(x::AbstractSimplex, ::Tuple{}, ::Int, ::Int) = x
function szczarba(x::AbstractSimplex, ii::Tuple, k::Int, l::Int = 0)
# l keeps track of how often the simplicial operator has been derived
# println("x = $x ii = $ii k = $k l = $l")
i1 = first(ii)
i2 = Base.tail(ii)
if k < i1
szczarba(s(d(x, i1-k+l), l), i2, k, l+1)
elseif k == i1
szczarba(x, i2, k, l+1)
else
szczarba(s(x, l), i2, k-1, l+1)
end
end
function szczarba(x::AbstractSimplex, twf, ii::Tuple)
n = dim(x)
if n == 0
error("illegal arguments")
end
g = szczarba(inv(twf(x)), ii, 0)
for k in 1:n-1
x = d(x, 0)
g *= szczarba(inv(twf(x)), ii, k)
end
g
end
# TODO: sign
function foreach_szczarba(f, n::Int)
ii = zeros(Int, n)
while true
f(ii)
k = n
while ii[k] == k-1
ii[k] = 0
k -= 1
if k == 0
return
end
end
ii[k] += 1
end
end
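# The loop above walks all index tuples (i₁, …, iₙ) with iₖ ∈ 0:k-1 in
# factorial-number-system order, so there are n! of them — one per summand
# of the Szczarba operator. A standalone sketch (illustrative name):

```julia
function each_index(f, n::Int)
    ii = zeros(Int, n)
    while true
        f(copy(ii))                    # hand the caller a snapshot
        k = n
        while k > 0 && ii[k] == k - 1  # carry: reset maxed-out digits
            ii[k] = 0
            k -= 1
        end
        k == 0 && return               # all digits were maxed out: done
        ii[k] += 1
    end
end

count = Ref(0)
each_index(_ -> count[] += 1, 4)
@assert count[] == factorial(4)        # 24 index tuples
```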
function szczarba(x::AbstractSimplex, twf)
n = dim(x)
if n > 1
a = TODO
foreach_szczarba(n) do ii, ss
addcoeff!(a, szczarba(x, twf, ii), ss)
end
a
elseif n == 1
g = inv(twf(x))
Linear(g => 1, one(g) => -1)
else
error("illegal arguments")
end
end
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
|
[
"MIT"
] | 0.2.0 | 9c7c79fd1a7bfe329908fd80360bb4fb9c4a07a2 | code | 1145 | #
# twisted Cartesian products
#
export TwistedProductSimplex
struct TwistedProductSimplex{TWF, B<:AbstractSimplex, F<:AbstractSimplex} <: AbstractSimplex
b::B
f::F
twf::TWF
@inline function TwistedProductSimplex(b::B, f::F, twf::TWF) where {B <: AbstractSimplex, F <: AbstractSimplex, TWF}
@boundscheck if dim(b) != dim(f)
error("simplices must be of the same dimension")
end
new{TWF, B, F}(b, f, twf)
end
end
show(io::IO, x::TwistedProductSimplex) = print(io, "($(x.b),$(x.f))")
==(x::T, y::T) where T <: TwistedProductSimplex = x.b == y.b && x.f == y.f
hash(x::TwistedProductSimplex, h::UInt) = hash(x.f, hash(x.b, h))
dim(x::TwistedProductSimplex) = dim(x.b)
function d(x::TwistedProductSimplex, k)
if k == 0
f = x.twf(x.b) * d(x.f, 0)
else
f = d(x.f, k)
end
TwistedProductSimplex(d(x.b, k), f, x.twf)
end
function s(x::TwistedProductSimplex, k)
TwistedProductSimplex(s(x.b, k), s(x.f, k), x.twf)
end
@propagate_inbounds isdegenerate(x::TwistedProductSimplex, k::Int) = isdegenerate(x.b, k) && isdegenerate(x.f, k)
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
|
[
"MIT"
] | 0.2.0 | 9c7c79fd1a7bfe329908fd80360bb4fb9c4a07a2 | code | 18364 | using Test, StructEqualHash, LinearCombinations, SimplicialSets
using SimplicialSets: d, s, Interval, interval_length
using SimplicialSets.TestHelpers
using LinearCombinations: diff, signed
const TENSORMAP = Tensor
#
# Interval
#
@testset "Interval" begin
for a in ( (2, 5), 2 => 5, 2:5 )
@test a isa Interval
@test interval_length(a) == 4
end
end
function test_simplex(x::AbstractSimplex, n)
@test dim(x) isa Integer
@test n == dim(x) >= 0
@test hash(x) isa UInt
xc = @inferred copy(x)
@test x == xc
@test hash(x) == hash(xc)
@test_throws Exception d(x, -1)
@test_throws Exception d(x, n+1)
n == 0 && @test_throws Exception d(x, 0)
@test_throws Exception s(x, -1)
@test_throws Exception s(x, n+1)
for i in 0:n
if n >= 1
y = @inferred d(x, i)
@test dim(y) == n-1 && typeof(y) == typeof(x)
end
y = @inferred s(x, i)
@test dim(y) == n+1 && typeof(y) == typeof(x)
@test @inferred isdegenerate(y)
@test @inferred isdegenerate(y, i)
!isdegenerate(x) && @test all(0:n) do j
@inferred(isdegenerate(y, j)) == (j == i)
end
end
if n >= 2
for j in 0:n, i in 0:j-1
@test d(d(x, j), i) == d(d(x, i), j-1)
end
end
for j in 0:n, i in 0:j
@test s(s(x, j), i) == s(s(x, i), j+1)
end
for j in 0:n, i in 0:j-1
@test d(s(x, Int8(j)), BigInt(i)) == s(d(x, Int16(i)), Int32(j-1))
end
for j in 0:n
@test d(s(x, j), j) == d(s(x, j), j+1) == x
end
for j in 0:n, i in j+2:n+1
@test d(s(x, Int16(j)), Int8(i)) == s(d(x, Int32(i-1)), BigInt(j))
end
# test d(x, kk)
for R in (Int8, Int, BigInt)
kk = R[k for k in 0:n if rand(Bool)]
length(kk) == n+1 && popfirst!(kk)
@test d(x, kk) == undo_basic(d(BasicSimplex(x), kk))
end
# test s(x, kk)
for R in (Int8, Int, BigInt)
kk = R[]
l = 0
for i in 0:div(n, 3)
k = rand(l:n+i)
push!(kk, k)
l = k+1
end
@test s(x, kk) == undo_basic(s(BasicSimplex(x), kk))
end
# test r(x, kk)
@test_throws Exception r(x, [])
@test_throws Exception r(x, [(1,2)])
for R in (Int8, Int, BigInt)
kk = UnitRange{R}[]
k2 = 0
while k2 <= (n <= 2 ? n-1 : n-2)
k1 = rand(k2:n)
k2 = rand(k1:n)
push!(kk, R(k1):R(k2))
end
if !isempty(kk)
y = SimplicialSets.r(x, kk)
@test dim(y) == sum(map(length, kk))-1
@test y == undo_basic(SimplicialSets.r(BasicSimplex(x), kk))
end
end
end
@testset failfast=true "BasicSimplex" begin
for n in 0:3
x = SymbolicSimplex('x', n)
y = BasicSimplex(x)
@test dim(y) == dim(x)
n > 0 && @test all(0:n) do k ; d(y, k) == BasicSimplex(d(y.x, k)) end
@test all(0:n) do k ; s(y, k) == BasicSimplex(s(y.x, k)) end
test_simplex(y, n)
end
x = BarSimplex([Lattice(1,0)]; op = +)
y = BarSimplex([Lattice(0,1)]; op = +)
p = LoopGroupSimplex(SymbolicSimplex('p', 3))
q = LoopGroupSimplex(SymbolicSimplex('q', 3))
for (x1, x2) in ( (x, y), (p, q) )
y1 = BasicSimplex(x1)
y2 = BasicSimplex(x2)
@test *(y1) == y1
@test y1 * y2 * y1 == BasicSimplex(x1 * x2 * x1)
@test inv(y1) == BasicSimplex(inv(x1))
if x1 isa BarSimplex{AddToMul{Lattice{2}}}
# / is defined for commutative groups only
@test y1 / y2 == BasicSimplex(x1 / x2)
else
@test_throws Exception y1 / y2
end
@test one(y1) == BasicSimplex(one(x1))
@test one(y1, 5) == BasicSimplex(one(x1, 5))
@test one(typeof(y1)) == BasicSimplex(one(typeof(x1)))
@test one(typeof(y1), 4) == BasicSimplex(one(typeof(x1), 4))
end
end
function test_group(x::T, is_commutative) where T <: AbstractSimplex
n = dim(x)
onex = @inferred one(x)
test_simplex(onex, n)
@test isone(onex)
@test one(x, Int8(n+1)) == one(T, BigInt(n+1))
@test one(T) == one(T, 0)
@test_throws Exception one(x, -1)
for i in 0:n
n > 0 && @test d(onex, i) == one(x, n-1)
@test s(onex, i) == one(x, n+1)
end
@test *(x) == x
@test x * onex == x == onex * x
@test isone(x * inv(x)) && isone(inv(x) *x)
@test_throws Exception x*one(T, n+1)
@test @inferred(x^0) == onex
@test @inferred(x^1) == x
@test @inferred(x^3) == x*x*x
@test @inferred(x^(-1)) == inv(x)
@test @inferred(x^(-2)) == inv(x)^2
invx = @inferred inv(x)
@test dim(invx) == n
for i in 0:n
n > 0 && @test d(invx, i) == inv(d(x, i))
@test s(invx, i) == inv(s(x, i))
end
a = Linear(x => Int8(1), inv(x) => Int8(2))
@test @inferred(one(a)) == Linear(one(T) => one(Int8))
@test isone(one(a))
@test a * one(a) == a == one(a) * a
if iseven(n)
@test a*a*a == @inferred a^3
else
is_commutative && @test iszero(a*a)
end
end
function test_group(x::T, y::T, is_commutative) where T <: AbstractSimplex
k, l = dim(x), dim(y)
if k == l
xy = @inferred x*y
@test dim(xy) == k
for i in 0:k
k > 0 && @test d(xy, i) == d(x, i)*d(y, i)
@test s(xy, i) == s(x, i)*s(y, i)
end
@test xy*x == x*y*x == x*(y*x)
if is_commutative
@test y*x == xy
@test x/y == x*inv(y)
else
@test_throws Exception x/y
end
else
@test_throws Exception x*y
end
a = Linear{T,BigInt}(x => 2)
b = Linear{T,Float32}(y => 3)
ab = @inferred a*b
@test coefftype(ab) == promote_type(BigInt, Float32) && termtype(ab) == T
!iszero(ab) && @test deg(ab) == k+l
is_commutative && @test b*a == (-1)^(k*l) * ab
@test diff(ab) == diff(a)*b + (-1)^k*a*diff(b)
end
@testset failfast=true "SymbolicSimplex" begin
for n in (0, 1, 2, 14)
x = SymbolicSimplex(:x, BigInt(n))
test_simplex(x, n)
y = SymbolicSimplex('y', 1:2:2*n+1)
test_simplex(y, n)
v = sort!(rand(UInt8(0):UInt8(31), n+1))
z = SymbolicSimplex(:z, v)
test_simplex(z, n)
@test isdegenerate(z) == !allunique(v)
end
end
@testset failfast=true "ProductSimplex" begin
@test_throws Exception ProductSimplex()
@test_throws Exception ProductSimplex(())
x = SymbolicSimplex('x', 2)
y = SymbolicSimplex('y', 3)
@test_throws Exception ProductSimplex(x, y)
@test_throws Exception ProductSimplex(x; dim = 3)
@test_throws Exception ProductSimplex(x, y; dim = 3)
for n in 0:3
xv = ntuple(k -> BasicSimplex(SymbolicSimplex('a'+k, n)), 4)
for k in 0:4
@test_throws Exception ProductSimplex(xv[1:k]; dim = -1)
w = @inferred ProductSimplex(xv[1:k]; dim = BigInt(n))
@test w == @inferred ProductSimplex(xv[1:k]...; dim = n)
if k > 0
v = @inferred ProductSimplex(xv[1:k])
@test v == w
end
test_simplex(w, n)
end
end
end
@testset failfast=true "LoopGroupSimplex" begin
x = SymbolicSimplex('x', 3)
xx = LoopGroupSimplex(x)
xc = LoopGroupSimplex(copy(x))
@test xc == copy(xx) !== xx
m = 3
for n in 1:4
# xv = ntuple(k -> LoopGroupSimplex(SymbolicSimplex('a'+k, n)), m)
xv = ntuple(k -> LoopGroupSimplex(BasicSimplex(SymbolicSimplex('a'+k, n))), m)
u = xv[1]
for k in 0:m, l in 0:m
v = prod(xv[1:k]; init = one(u))
w = prod(xv[m-l+1:m]; init = one(u))
test_simplex(v, n-1)
test_group(v, false)
test_group(v, w, false)
end
end
end
function test_barsimplex(n, m, groupsimplex, is_commutative)
for k in 0:n
T = typeof(groupsimplex(0, m))
v = T[groupsimplex(i-1, m) for i in 1:k]
x = @inferred BarSimplex(v)
xc = BarSimplex(copy(v))
@test xc == copy(x) !== x
test_simplex(x, k)
is_commutative || continue
test_group(x, true)
for l in 0:n
y = BarSimplex(T[groupsimplex(i-1, m) for i in 1:l])
test_group(x, y, true)
end
end
end
# random_lattice_element(_, m) = AddToMul(Lattice(Tuple(rand(-3:3, m))))
# we need an inferrable return type
# random_lattice_element(_, _)::AddToMul{Lattice{3}} = AddToMul(Lattice(ntuple(_ -> rand(-3:3), 3)))
random_lattice_element(_, _) = AddToMul(Lattice(ntuple(_ -> rand(-3:3), 3)))
struct M
a::Matrix{Rational{Int}}
end
@struct_equal_hash M
Base.one(::Type{M}) = M([1 0; 0 1])
Base.one(::M) = one(M)
Base.isone(x::M) = isone(x.a)
Base.inv(x::M) = M(inv(x.a))
Base.:*(x::M, ys::M...) = M(*(x.a, map(y -> y.a, ys)...))
# Base.:/(x::M, y::M) = x*inv(y)
function random_matrix_2x2(_, _)
a11 = rand(1:8)
a22 = rand(1:8)
b = a11*a22+1
a12 = findfirst(i -> rem(b, i) == 0, 2:b) + 1
a21 = div(b, a12)
M([a11 a12; a21 a22])
end
@testset failfast=true "BarSimplex discrete" begin
test_barsimplex(4, 3, random_lattice_element, true)
test_barsimplex(4, 3, random_matrix_2x2, false)
end
function random_loopgroupsimplex(n, m)
prod(LoopGroupSimplex(SymbolicSimplex(rand('a':'z'), n+1)) for _ in 1:m)
end
function random_barsimplex(n, m)
BarSimplex([random_lattice_element(0, m) for i in 1:n])
end
function random_barbarsimplex(n, m)
BarSimplex([random_barsimplex(i-1, m) for i in 1:n])
end
@testset failfast=true "BarSimplex simplicial" begin
test_barsimplex(4, 3, random_barsimplex, true) # test double bar construction of Z^3
test_barsimplex(3, 3, random_barbarsimplex, true) # test triple bar construction of Z^3
test_barsimplex(4, 3, BasicSimplex ∘ random_loopgroupsimplex, false)
# it seems that the multiplication of chains on the bar construction of a loop group is graded commutative!
end
#=
q2(n) = div(n*(n+1), 2)
qsign(a::Linear) = Linear(x => signed(q2(deg(x)), c) for (x, c) in a)
opp_linear(a::Linear) = qsign(opposite(a))
opp_swap(a::Linear{<:ProductSimplex}) =
# deg(x) == degree of ProductSimplex(x, y)
Linear(opposite(ProductSimplex(y, x)) => signed(q2(deg(x)), c) for ((x, y), c) in a)
opp_swap(a::Linear{<:Tensor}) =
Linear(opposite(Tensor(y, x)) => signed(q2(deg(x)+deg(y)), c) for ((x, y), c) in a)
=#
const opposite_swap = opposite ∘ swap
@testset failfast=true "OppositeSimplex" begin
for n in (0, 1, 4)
x = SymbolicSimplex(:x, n)
test_simplex(x, n)
test_simplex(BasicSimplex(x), n)
x = random_loopgroupsimplex(n, 2)
y = random_loopgroupsimplex(n, 2)
test_simplex(x, n)
test_group(x, false)
test_group(x, y, false)
x = random_barsimplex(n, 2)
y = random_barsimplex(n, 2)
test_simplex(x, n)
test_group(x, true)
test_group(x, y, true)
end
for n in 0:8
x = SymbolicSimplex(:x, n)
a = Linear(x => 1)
@test opposite(diff(a)) == diff(opposite(a))
end
end
@testset failfast=true "ez" begin
@test @inferred(ez(Tensor())) == Linear(ProductSimplex(; dim = 0) => 1)
for k in 0:8
t = Tensor(ntuple(Returns(SymbolicSimplex('i', 1)), k))
@inferred ez(t)
@inferred ez(t; coefftype = Val(Float16))
end
for m in 0:4, n in 0:4
x = SymbolicSimplex(:x, m)
y = SymbolicSimplex(:y, n)
a = tensor(x, y)
c1 = a |> ez |> opposite_swap
c2 = a |> opposite_swap |> ez
@test c1 == c2
end
n = 3
x = random_loopgroupsimplex(n, 2)
y = random_loopgroupsimplex(n, 2)
z = random_loopgroupsimplex(n, 2)
t = Tensor(BasicSimplex.((x, y, z)))
@test ez(undo_basic(t)) == undo_basic(ez(t))
rgr = regroup( :( (1,(2,3)) ), :( (1,2,3) ) )
rgl = regroup( :( ((1,2),3) ), :( (1,2,3) ) )
@test rgr(ez(x, ez(y, z))) == ez(x, y, z) == rgl(ez(ez(x, y), z))
w = SymbolicSimplex('w', 0)
@test @inferred(ez(Tensor(w, w))) == Linear(ProductSimplex(w, w) => 1)
@test ez(x, w) == Linear(ProductSimplex(x, s(w, 0:n-1)) => 1)
@test ez(w, x) == Linear(ProductSimplex(s(w, 0:n-1), x) => 1)
a = Linear(Tensor(x,y,z) => 1)
@test diff(ez(a)) == ez(diff(a))
end
@testset failfast=true "aw" begin
@test @inferred(aw(ProductSimplex(; dim = 0))) == Linear(Tensor() => 1)
@test iszero(aw(ProductSimplex(; dim = 1)))
for k in 0:8
w = ProductSimplex(ntuple(Returns(SymbolicSimplex('i', 1)), k); dim = 1)
@inferred aw(w)
@inferred aw(w; coefftype = Val(Float32))
end
for n in 0:8
x = SymbolicSimplex('x', n)
y = SymbolicSimplex('y', n)
w = ProductSimplex(x, y)
b = Linear(w => 1)
c1 = b |> aw |> opposite_swap
c2 = b |> opposite_swap |> aw
@test c1 == c2
end
n = 3
x = random_loopgroupsimplex(n, 2)
y = random_loopgroupsimplex(n, 2)
z = random_loopgroupsimplex(n, 2)
w = ProductSimplex(x, y, z)
w = ProductSimplex(BasicSimplex.((x, y, z)))
@test aw(undo_basic(w)) == undo_basic(aw(w))
rgr, rgri = regroup_inv( :( (1,(2,3)) ), :( (1,2,3) ) )
rgl, rgli = regroup_inv( :( ((1,2),3) ), :( (1,2,3) ) )
a = w |> aw
ar = w |> rgri |> aw |> TENSORMAP(identity, aw) |> rgr
al = w |> rgli |> aw |> TENSORMAP(aw, identity) |> rgl
@test ar == a == al
w = SymbolicSimplex('w', 0)
@test @inferred(aw(ProductSimplex(w, w))) == Linear(Tensor(w, w) => 1)
@test aw(ProductSimplex(x, s(w, 0:n-1))) == Linear(Tensor(x, w) => 1)
@test aw(ProductSimplex(s(w, 0:n-1), x)) == Linear(Tensor(w, x) => 1)
a = Linear(ProductSimplex(x,y,z) => 1)
@test diff(aw(a)) == aw(diff(a))
end
@testset failfast=true "shih" begin
for n in 0:8
x = SymbolicSimplex('x', n)
y = SymbolicSimplex('y', n)
w = ProductSimplex(x, y)
@inferred shih_eml(w; coefftype = Val(Int16))
@inferred shih_opp(w; coefftype = Val(Int32))
b = Linear(w => 1)
c1 = b |> shih_eml |> opposite_swap
c2 = b |> opposite_swap |> shih_opp
@test c1 == c2
end
n = 3
x = random_loopgroupsimplex(n, 2)
y = random_loopgroupsimplex(n, 2)
z = random_loopgroupsimplex(n, 2)
w = ProductSimplex(x, y)
b = Linear(w => 1)
c = Linear(ProductSimplex(BasicSimplex(x), BasicSimplex(y)) => 1)
u = SymbolicSimplex('u', 0)
v = ProductSimplex(x, y, z)
a = Linear(v => 1)
rgr, rgri = regroup_inv( :( (1,(2,3)) ), :( (1,2,3) ) )
rgl, rgli = regroup_inv( :( ((1,2),3) ), :( (1,2,3) ) )
for shih in (shih_eml, shih_opp)
@test shih(undo_basic(c)) == undo_basic(shih(b))
@test iszero(shih(shih(b)))
@test iszero(shih(ProductSimplex(u, u)))
c1 = a |> rgri |> shih |> rgr |> rgli |> shih |> rgl
c2 = a |> rgli |> shih |> rgl |> rgri |> shih |> rgr
@test c1 == -c2
end
end
@testset failfast=true "EZ relations" begin
for n in (0, 1, 4)
x = SymbolicSimplex('x', n)
y = SymbolicSimplex('y', n)
a = tensor(x, y)
w = ProductSimplex(x, y)
b = Linear(w => 1)
@test aw(ez(a)) == a
for shih in (shih_eml, shih_opp)
@test ez(aw(b)) - b == diff(shih(b)) + shih(diff(b))
end
end
end
@testset failfast=true "Hirsch formula" begin
f11, g21 = regroup_inv( :( (1,2,3) ), :( (1,(2,3)) ) )
f31, g11 = regroup_inv( :( (1,2,3) ), :( ((1,2),3) ) )
f21 = regroup( :( (1,2,3) ), :( (2,(1,3)) ) )
g31 = regroup( :( ((2,1),3) ), :( (2,3,1) ) )
for n in (0, 1, 2, 4)
x = SymbolicSimplex('x', n)
y = SymbolicSimplex('y', n)
z = SymbolicSimplex('z', n)
a = Linear( ProductSimplex(x, y, z) => 1 )
u = a |> f11 |> shih |> swap |> g11 |> aw
v = a |> f21 |> aw |> TENSORMAP(identity, aw ∘ swap ∘ shih) |> g21
w = a |> f31 |> aw |> TENSORMAP(aw ∘ swap ∘ shih, identity) |> g31
@test u == v + w
end
end
@testset failfast=true "AAFR formula" begin
# we check a formula from
# Alvarez, V.; Armario, J. A.; Frau, M. D.; Real, P.
# Algebra structures on the twisted Eilenberg-Zilber theorem
# TODO: what is the formula?
n = 3
x = SymbolicSimplex('x', n)
y = SymbolicSimplex('y', n)
z = SymbolicSimplex('z', n)
w = SymbolicSimplex('w', n)
xy = ProductSimplex(x, y)
zw = ProductSimplex(z, w)
a = Linear(xy => 1)
b = Linear(zw => 1)
f = regroup( :( ((1,2),(3,4)) ), :( ((1,3),(2,4)) ) )
t1 = ez(x, y)
t2 = shih(b)
t3 = ez(tensor(t1, t2))
t4 = f(t3)
t5 = shih(t4)
@test iszero(t5)
t1 = shih(a)
t2 = ez(z, w)
t3 = ez(tensor(t1, t2))
t4 = f(t3)
t5 = shih(t4)
@test iszero(t5)
t1 = shih(a)
t2 = shih(b)
t3 = ez(tensor(t1, t2))
t4 = f(t3)
t5 = shih(t4)
@test iszero(t5)
end
@testset failfast=true "surjections" begin
surj = @inferred Surjection{0}(Int[])
@test @inferred arity(surj) == 0
@test @inferred deg(surj) == 0
surj = @inferred Surjection{4}(Int[1,2,3,2,2,1,4])
@test @inferred arity(surj) == 4
@test @inferred deg(surj) == 3
@test @inferred isdegenerate(surj)
@test iszero(Linear(surj => 1))
surj = Surjection([1,3,2,1,4,2,1])
a = Linear(surj => 1)
b = @inferred diff(a)
@test typeof(b) == typeof(a)
@test iszero(diff(b))
c = @inferred diff(a; coeff = 2)
@test typeof(c) == typeof(a)
@test c == 2*b
c2 = @inferred diff(a; addto = c, coeff = -2)
@test c2 === c
@test iszero(c2)
end
@testset failfast=true "interval cuts" begin
surj = Surjection(Int[])
y = SymbolicSimplex('y', 0)
@test surj(y; coeff = -1) == Linear(Tensor() => -1)
s12 = Surjection(1:2)
s21 = Surjection([2,1])
f4 = TENSORMAP(coprod, coprod) ∘ coprod
rg = regroup(:(((1,2),(3,4))), :((1,2,3,4)))
y = SymbolicSimplex('y', 4)
z = random_loopgroupsimplex(4, 3)
w = random_barsimplex(4, 3)
for x in (BasicSimplex(y), y, ProductSimplex(y, y), z, w)
surj = Surjection(Int[])
@test iszero(surj(x))
cx = coprod(x)
@test s12(x) == cx
@test s21(x) == swap(cx)
s1234 = Surjection(1:4)
@test rg(f4(x)) == s1234(x)
surj = Surjection([1,3,3,2])
@test iszero(surj(x))
for n in 0:8
surj = Surjection(1:n)
@inferred surj(x)
end
end
end
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
|
[
"MIT"
] | 0.2.0 | 9c7c79fd1a7bfe329908fd80360bb4fb9c4a07a2 | docs | 3421 | # SimplicialSets.jl
This packages provides functions to work with simplicial sets. Various kinds
of simplicial sets are supported, including symbolic simplices, products,
bar constructions and Kan loop groups.
The Eilenberg-Zilber maps and interval cut operations are also implemented.
**Docstrings and other documentation are under construction.
Below we illustrate some of the available functionality.**
The package uses [LinearCombinations.jl](https://github.com/matthias314/LinearCombinations.jl)
to represent formal linear combinations of simplices. By default, coefficients are of type `Int`.
## Examples
### Basic operations and symbolic simplices
```julia
julia> using LinearCombinations, SimplicialSets
julia> x = SymbolicSimplex(:x, 4)
x[0,1,2,3,4]
julia> dim(x)
4
julia> using SimplicialSets: d, s
julia> d(x, 2)
x[0,1,3,4]
julia> s(x, 2)
x[0,1,2,2,3,4]
julia> isdegenerate(x), isdegenerate(s(x, 2))
(false, true)
julia> using LinearCombinations: diff
julia> diff(x)
x[0,1,2,3]-x[0,2,3,4]+x[1,2,3,4]+x[0,1,3,4]-x[0,1,2,4]
```
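
The face maps obey the simplicial identity `d(d(x, j), i) == d(d(x, i), j-1)` for `i < j`. This can be checked with no package at all, viewing a simplex as its vertex list (`face` below is an illustrative stand-in for `d`):

```julia
face(v, i) = deleteat!(copy(v), i + 1)   # delete vertex i (0-based, as in d)

v = [0, 1, 2, 3, 4]
for j in 0:4, i in 0:j-1
    @assert face(face(v, j), i) == face(face(v, i), j - 1)
end
```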
### Loop groups
```julia
julia> x, y = SymbolicSimplex(:x, 3), SymbolicSimplex(:y, 3)
(x[0,1,2,3], y[0,1,2,3])
julia> u, v = LoopGroupSimplex(x), LoopGroupSimplex(y)
(⟨x[0,1,2,3]⟩, ⟨y[0,1,2,3]⟩)
julia> w = u*v
⟨x[0,1,2,3],y[0,1,2,3]⟩
julia> inv(w)
⟨y[0,1,2,3]⁻¹,x[0,1,2,3]⁻¹⟩
julia> diff(w)
-⟨x[0,1,3],y[0,1,3]⟩+⟨x[0,1,2],y[0,1,2]⟩
+⟨x[1,2,3]⁻¹,x[0,2,3],y[1,2,3]⁻¹,y[0,2,3]⟩
```
### Eilenberg-Zilber maps
```julia
julia> x, y = SymbolicSimplex(:x, 2), SymbolicSimplex(:y, 2)
(x[0,1,2], y[0,1,2])
julia> z = ProductSimplex(x, y)
(x[0,1,2],y[0,1,2])
julia> ez(x, y) # shuffle map
-(x[0,0,1,1,2],y[0,1,1,2,2])+(x[0,1,2,2,2],y[0,0,0,1,2])
+(x[0,0,1,2,2],y[0,1,1,1,2])+(x[0,0,0,1,2],y[0,1,2,2,2])
+(x[0,1,1,1,2],y[0,0,1,2,2])-(x[0,1,1,2,2],y[0,0,1,1,2])
julia> aw(z) # Alexander-Whitney map
x[0,1]⊗y[1,2]+x[0]⊗y[0,1,2]+x[0,1,2]⊗y[2]
julia> shih(z) # Eilenberg-MacLane homotopy
-(x[0,1,1,2],y[0,1,2,2])+(x[0,0,1,1],y[0,1,1,2])
-(x[0,0,0,1],y[0,1,2,2])+(x[0,0,1,2],y[0,2,2,2])
```
Let's check that `shih` is indeed a homotopy from the identity to `ez∘aw`:
```julia
julia> diff(shih(z)) + shih(diff(z)) == ez(aw(z)) - z
true
```
Let's verify the "side conditions" for the Eilenberg-Zilber maps:
```julia
julia> shih(ez(x, y)), aw(shih(z)), shih(shih(z))
(0, 0, 0)
```
Let's check that the shuffle map is commutative:
```julia
julia> x, y = SymbolicSimplex(:x, 1), SymbolicSimplex(:y, 3)
(x[0,1], y[0,1,2,3])
julia> t = tensor(x, y)
x[0,1]⊗y[0,1,2,3]
julia> ez(t)
-(x[0,0,1,1,1],y[0,1,1,2,3])+(x[0,0,0,1,1],y[0,1,2,2,3])
+(x[0,1,1,1,1],y[0,0,1,2,3])-(x[0,0,0,0,1],y[0,1,2,3,3])
julia> a = swap(ez(t))
-(y[0,1,2,3,3],x[0,0,0,0,1])-(y[0,1,1,2,3],x[0,0,1,1,1])
+(y[0,0,1,2,3],x[0,1,1,1,1])+(y[0,1,2,2,3],x[0,0,0,1,1])
julia> swap(t)
-y[0,1,2,3]⊗x[0,1]
julia> b = ez(swap(t))
-(y[0,1,2,3,3],x[0,0,0,0,1])-(y[0,1,1,2,3],x[0,0,1,1,1])
+(y[0,0,1,2,3],x[0,1,1,1,1])+(y[0,1,2,2,3],x[0,0,0,1,1])
julia> a == b
true
```
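
The shuffle map `ez` on a `p`-simplex and a `q`-simplex has one summand per `(p,q)`-shuffle, i.e. `binomial(p+q, p)` of them — 6 terms in the `(2,2)` example further up and 4 in the `(1,3)` example above. A standalone enumeration (plain Julia, illustrative name):

```julia
# One (p,q)-shuffle per p-element subset of {1,…,p+q}; the subset records
# which slots the first factor's degeneracies occupy.
shuffles(p, q) = [[i for i in 1:p+q if (m >> (i - 1)) & 1 == 1]
                  for m in 0:(2^(p + q) - 1) if count_ones(m) == p]

@assert length(shuffles(2, 2)) == 6              # terms of ez(x, y) above
@assert length(shuffles(1, 3)) == 4              # terms of ez(t) above
@assert length(shuffles(3, 2)) == binomial(5, 3)
```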
### Interval cut operations
```julia
julia> sj = Surjection([1,3,2,1])
Surjection{3}([1, 3, 2, 1])
julia> x = SymbolicSimplex(:x, 2)
x[0,1,2]
julia> sj(x)
-x[0,1,2]⊗x[1]⊗x[0,1]-x[0,2]⊗x[1,2]⊗x[0,1]-x[0,1,2]⊗x[0,1]⊗x[0]
-x[0,1,2]⊗x[2]⊗x[1,2]+x[0,2]⊗x[0,1,2]⊗x[0]+x[0,2]⊗x[2]⊗x[0,1,2]
-x[0,1,2]⊗x[1,2]⊗x[1]
julia> diff(sj)
Surjection{3}([2, 3, 1])-Surjection{3}([1, 2, 3])
julia> diff(sj(x)) == diff(sj)(x) + (-1)^deg(sj) * sj(diff(x))
true
```
| SimplicialSets | https://github.com/matthias314/SimplicialSets.jl.git |
|
[
"MIT"
] | 0.1.1 | 116831207eb15bcfa0844a4de6cc07929ecef0de | code | 847 | module PairAsPipe
export @pap
using MacroTools
using MacroTools: @capture
macro pap(ex)
has_newcol = @capture(ex, newcol_ = rhs_)
if !has_newcol
rhs = ex
end
# for obtaining symbols
symbols = QuoteNode[]
gen_symbols = Symbol[]
rhs = MacroTools.postwalk(function(x)
if x isa QuoteNode
push!(symbols, x)
push!(gen_symbols, MacroTools.gensym(x.value))
return gen_symbols[end]
else
return x
end
end, rhs)
lhs = Expr(:tuple, gen_symbols...)
# the fn in
# :col => fn
fn = Expr(:->, lhs, rhs)
# the [:col1, :col2] in
# [:col1, :col2] => fn
cols = Expr(:vect, symbols...)
if has_newcol
fn = Expr(:call, :(=>), fn, QuoteNode(newcol))
end
esc(Expr(:call, :(=>), cols, fn))
end
end
| PairAsPipe | https://github.com/xiaodaigh/PairAsPipe.jl.git |
|
[
"MIT"
] | 0.1.1 | 116831207eb15bcfa0844a4de6cc07929ecef0de | code | 269 | using PairAsPipe
using Test
@testset "PairAsPipe.jl" begin
# Write your tests here.
end
using DataFrames, DataFramesMeta, Pipe
data = DataFrame(a = rand(1:8, 100), b = rand(1:8, 100), c = rand(100))
@pipe data |>
groupby(_, :a) |>
@orderby(_, :b)
| PairAsPipe | https://github.com/xiaodaigh/PairAsPipe.jl.git |
|
[
"MIT"
] | 0.1.1 | 116831207eb15bcfa0844a4de6cc07929ecef0de | docs | 876 | ## PairAsPipe.jl
**P**air**A**s**P**ipe (`@pap`) is designed to work smoothly with DataFrames.jl's API.
The macro `@pap` transforms `newcol = fn(:col)` into `[:col] => fn => :newcol`, an elegant (ab)use of pair syntax (`a => b`) as pipes. Hence the name of the package.
### Usage
Some examples
```julia
using DataFrames, PairAsPipe
df = DataFrame(a = 1:3)
transform(df, @pap b = :a .* 2) # same as transform(df, :a => a->a.*2 => :b)
transform(df, @pap :a .* 2) # same as transform(df, :a => a->a.*2); except for output column name
transform(df, @pap sum(:a)) # same as transform(df, :a => sum); except for output column name
filter(@pap(:a == 1), df) # same as filter([:a] => a -> a == 1, df)
```
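To check what the macro produces, you can compare it with the hand-written pair or expand it directly. A minimal sketch (assuming DataFrames.jl and PairAsPipe are installed):

```julia
using DataFrames, PairAsPipe

df = DataFrame(a = 1:3)

# These two calls are equivalent; the macro builds the pair at parse time:
transform(df, @pap b = :a .* 2)
transform(df, [:a] => (a -> a .* 2) => :b)

# Inspect the generated `[:col] => fn => :newcol` pair:
@macroexpand @pap b = :a .* 2
```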
### Similar Work
* [DataFramesMacros.jl](https://github.com/matthieugomez/DataFramesMacros.jl)
* [DataFramesMeta.jl](https://github.com/JuliaData/DataFramesMeta.jl)
| PairAsPipe | https://github.com/xiaodaigh/PairAsPipe.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 2436 | ### benchmark.jl --- Benchmark Cuba.jl and Cuba C Library
using Cuba, Printf
const ndim=3
const ncomp=11
const atol=1e-8
const rtol=1e-8
rsq(x,y,z) = abs2(x) + abs2(y) + abs2(z)
t1(x,y,z) = sin(x)*cos(y)*exp(z)
t2(x,y,z) = 1.0/((x + y)*(x + y) + 0.003)*cos(y)*exp(z)
t3(x,y,z) = 1.0/(3.75 - cos(pi*x) - cos(pi*y) - cos(pi*z))
t4(x,y,z) = abs(rsq(x,y,z) - 0.125)
t5(x,y,z) = exp(-rsq(x,y,z))
t6(x,y,z) = 1.0/(1.0 - x*y*z + 1e-10)
t7(x,y,z) = sqrt(abs(x - y - z))
t8(x,y,z) = exp(-x*y*z)
t9(x,y,z) = abs2(x)/(cos(x + y + z + 1.0) + 5.0)
t10(x,y,z) = (x > 0.5) ? 1.0/sqrt(x*y*z + 1e-5) : sqrt(x*y*z)
t11(x,y,z) = (rsq(x,y,z) < 1.0) ? 1.0 : 0.0
function test(x::Vector{Float64}, f::Vector{Float64})
@inbounds f[1] = t1( x[1], x[2], x[3])
@inbounds f[2] = t2( x[1], x[2], x[3])
@inbounds f[3] = t3( x[1], x[2], x[3])
@inbounds f[4] = t4( x[1], x[2], x[3])
@inbounds f[5] = t5( x[1], x[2], x[3])
@inbounds f[6] = t6( x[1], x[2], x[3])
@inbounds f[7] = t7( x[1], x[2], x[3])
@inbounds f[8] = t8( x[1], x[2], x[3])
@inbounds f[9] = t9( x[1], x[2], x[3])
@inbounds f[10] = t10(x[1], x[2], x[3])
@inbounds f[11] = t11(x[1], x[2], x[3])
end
@info "Performance of Cuba.jl:"
for alg in (vegas, suave, divonne, cuhre)
# Run the integrator a first time to compile the function.
alg(test, ndim, ncomp, atol=atol,
rtol=rtol);
start_time = time_ns()
alg(test, ndim, ncomp, atol=atol,
rtol=rtol);
end_time = time_ns()
println(@sprintf("%10.6f", Int(end_time - start_time)/1e9),
" seconds (", uppercasefirst(string(nameof(alg))), ")")
end
cd(@__DIR__) do
if mtime("benchmark.c") > mtime("benchmark-c")
run(`gcc -O3 -I $(Cuba.Cuba_jll.artifact_dir)/include -o benchmark-c benchmark.c $(Cuba.Cuba_jll.libcuba_path) -lm`)
end
@info "Performance of Cuba Library in C:"
withenv(Cuba.Cuba_jll.JLLWrappers.LIBPATH_env => Cuba.Cuba_jll.LIBPATH[]) do
run(`./benchmark-c`)
end
if success(`which gfortran`)
if mtime("benchmark.f") > mtime("benchmark-fortran")
run(`gfortran -O3 -fcheck=no-bounds -cpp -o benchmark-fortran benchmark.f $(Cuba.Cuba_jll.libcuba_path) -lm`)
end
@info "Performance of Cuba Library in Fortran:"
withenv(Cuba.Cuba_jll.JLLWrappers.LIBPATH_env => Cuba.Cuba_jll.LIBPATH[]) do
run(`./benchmark-fortran`)
end
end
end
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 223 | using Documenter, Cuba
makedocs(
modules = [Cuba],
sitename = "Cuba",
strict = true,
)
deploydocs(
repo = "github.com/giordano/Cuba.jl.git",
target = "build",
deps = nothing,
make = nothing,
)
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 8534 | ### Cuba.jl --- Julia library for multidimensional numerical integration.
__precompile__()
module Cuba
using Cuba_jll
export vegas, suave, divonne, cuhre
### Default values of parameters
# Common arguments.
const NVEC = 1
const RTOL = 1e-4
const ATOL = 1e-12
const FLAGS = 0
const SEED = 0
const MINEVALS = 0
const MAXEVALS = 1000000
const STATEFILE = ""
const SPIN = C_NULL
# Vegas-specific arguments.
const NSTART = 1000
const NINCREASE = 500
const NBATCH = 1000
const GRIDNO = 0
# Suave-specific arguments.
const NNEW = 1000
const NMIN = 2
const FLATNESS = 25.0
# Divonne-specific arguments.
const KEY1 = 47
const KEY2 = 1
const KEY3 = 1
const MAXPASS = 5
const BORDER = 0.0
const MAXCHISQ = 10.0
const MINDEVIATION = 0.25
const NGIVEN = 0
const LDXGIVEN = 0
const XGIVEN = 0
const NEXTRA = 0
const PEAKFINDER = C_NULL
# Cuhre-specific argument.
const KEY = 0
### Functions
# Note on implementation: instead of passing the function that performs
# calculations as "integrand" argument to integrator routines, we pass the
# pointer to this function and use "func_" to actually perform calculations.
# Without doing so and trying to "cfunction" a function not yet defined we would
# run into a
#
# cfunction: no method exactly matched the required type signature (function not yet c-callable)
#
# error. See http://julialang.org/blog/2013/05/callback for more information on
# this, in particular the section about "qsort_r" ("Passing closures via
# pass-through pointers"). Thanks to Steven G. Johnson for pointing to this.
#
# For better performance, when nvec == 1 we store a simple Vector, instead of a Matrix with
# second dimension equal to 1.
function generic_integrand!(ndim::Cint, x_::Ptr{Cdouble}, ncomp::Cint,
f_::Ptr{Cdouble}, func!)
# Get arrays from "x_" and "f_" pointers.
x = unsafe_wrap(Array, x_, (ndim,))
f = unsafe_wrap(Array, f_, (ncomp,))
func!(x, f)
return Cint(0)
end
function generic_integrand_userdata!(ndim::Cint, x_::Ptr{Cdouble}, ncomp::Cint,
f_::Ptr{Cdouble}, func!_and_userdata)
func!, userdata = func!_and_userdata
# Get arrays from "x_" and "f_" pointers.
x = unsafe_wrap(Array, x_, (ndim,))
f = unsafe_wrap(Array, f_, (ncomp,))
func!(x, f, userdata)
return Cint(0)
end
function generic_integrand!(ndim::Cint, x_::Ptr{Cdouble}, ncomp::Cint,
f_::Ptr{Cdouble}, func!, nvec::Cint)
# Get arrays from "x_" and "f_" pointers.
x = unsafe_wrap(Array, x_, (ndim, nvec))
f = unsafe_wrap(Array, f_, (ncomp, nvec))
func!(x, f)
return Cint(0)
end
function generic_integrand_userdata!(ndim::Cint, x_::Ptr{Cdouble}, ncomp::Cint,
f_::Ptr{Cdouble}, func!_and_userdata, nvec::Cint)
func!, userdata = func!_and_userdata
# Get arrays from "x_" and "f_" pointers.
x = unsafe_wrap(Array, x_, (ndim, nvec))
f = unsafe_wrap(Array, f_, (ncomp, nvec))
func!(x, f, userdata)
return Cint(0)
end
# Return pointer for "integrand", to be passed as "integrand" argument to Cuba functions.
integrand_ptr(integrand::T) where {T} = @cfunction(generic_integrand!, Cint,
(Ref{Cint}, # ndim
Ptr{Cdouble}, # x
Ref{Cint}, # ncomp
Ptr{Cdouble}, # f
Ref{T})) # userdata
integrand_ptr_userdata(integrand::T, userdata::D) where {T, D} = @cfunction(generic_integrand_userdata!, Cint,
(Ref{Cint}, # ndim
Ptr{Cdouble}, # x
Ref{Cint}, # ncomp
Ptr{Cdouble}, # f
Ref{Tuple{T, D}})) # userdata
integrand_ptr_nvec(integrand::T) where {T} = @cfunction(generic_integrand!, Cint,
(Ref{Cint}, # ndim
Ptr{Cdouble}, # x
Ref{Cint}, # ncomp
Ptr{Cdouble}, # f
Ref{T}, # userdata
Ref{Cint})) # nvec
integrand_ptr_nvec_userdata(integrand::T, userdata::D) where {T, D} = @cfunction(generic_integrand_userdata!, Cint,
(Ref{Cint}, # ndim
Ptr{Cdouble}, # x
Ref{Cint}, # ncomp
Ptr{Cdouble}, # f
Ref{Tuple{T, D}}, # userdata
Ref{Cint})) # nvec
abstract type Integrand{T} end
function __init__()
Cuba.cores(0, 10000)
end
# handle keyword deprecation
function tols(atol,rtol,abstol,reltol)
if !ismissing(abstol) || !ismissing(reltol)
Base.depwarn("abstol and reltol keywords are now atol and rtol, respectively", :Cuba)
end
return coalesce(abstol,atol), coalesce(reltol,rtol)
end
include("cuhre.jl")
include("divonne.jl")
include("suave.jl")
include("vegas.jl")
struct Integral{T}
integral::Vector{Float64}
error::Vector{Float64}
probability::Vector{Float64}
neval::Int64
fail::Int32
nregions::Int32
end
Base.getindex(x::Integral, n::Integer) = getfield(x, n)
function Base.iterate(x::Integral, i=1)
i > 6 && return nothing
return x[i], i + 1
end
_print_fail_extra(io::IO, x::Integral) = nothing
_print_fail_extra(io::IO, x::Integral{<:Divonne}) =
print(io, "\nHint: Try increasing `maxevals` to ", x.neval+x.fail)
function print_fail(io::IO, x::Integral)
print(io, "Note: ")
fail = x.fail
if fail < 0
print(io, "Dimension out of range")
elseif fail == 0
print(io, "The desired accuracy was reached")
elseif fail > 0
print(io, "The accuracy was not met within the maximum number of evaluations")
_print_fail_extra(io, x)
end
end
function Base.show(io::IO, x::Integral)
ncomp = length(x.integral)
println(io, ncomp == 1 ? "Component:" : "Components:")
for i in 1:ncomp
println(io, " ", lpad("$i", ceil(Int, log10(ncomp+1))), ": ", x.integral[i],
" ± ", x.error[i], " (prob.: ", x.probability[i],")")
end
println(io, "Integrand evaluations: ", x.neval)
println(io, "Number of subregions: ", x.nregions)
print_fail(io, x)
end
@inline function dointegrate(x::T) where {T<:Integrand}
# Choose the integrand function wrapper based on the value of `nvec`. This function is
# called only once, so the overhead of the following if should be negligible.
if x.nvec == 1
integrand = ismissing(x.userdata) ? integrand_ptr(x.func) : integrand_ptr_userdata(x.func, x.userdata)
else
integrand = ismissing(x.userdata) ? integrand_ptr_nvec(x.func) : integrand_ptr_nvec_userdata(x.func, x.userdata)
end
integral = Vector{Cdouble}(undef, x.ncomp)
error = Vector{Cdouble}(undef, x.ncomp)
prob = Vector{Cdouble}(undef, x.ncomp)
neval = Ref{Int64}(0)
fail = Ref{Cint}(0)
nregions = Ref{Cint}(0)
dointegrate!(x, integrand, integral, error, prob, neval, fail, nregions)
return Integral{T}(integral, error, prob, neval[], fail[], nregions[])
end
### Other functions, not exported
function cores(n::Integer, p::Integer)
ccall((:cubacores, libcuba), Ptr{Cvoid}, (Ptr{Cint}, Ptr{Cint}), Ref{Cint}(n), Ref{Cint}(p))
return 0
end
function accel(n::Integer, p::Integer)
ccall((:cubaaccel, libcuba), Ptr{Cvoid}, (Ptr{Cint}, Ptr{Cint}), Ref{Cint}(n), Ref{Cint}(p))
return 0
end
function init(f::Ptr{Cvoid}, arg::Ptr{Cvoid})
ccall((:cubainit, libcuba), Ptr{Cvoid}, (Ptr{Cvoid}, Ptr{Cvoid}), f, arg)
return 0
end
function exit(f::Ptr{Cvoid}, arg::Ptr{Cvoid})
ccall((:cubaexit, libcuba), Ptr{Cvoid}, (Ptr{Cvoid}, Ptr{Cvoid}), f, arg)
return 0
end
end # module
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 2801 | ### cuhre.jl --- Integrate with Cuhre method
struct Cuhre{T, D} <: Integrand{T}
func::T
userdata::D
ndim::Int
ncomp::Int
nvec::Int64
rtol::Cdouble
atol::Cdouble
flags::Int
minevals::Int64
maxevals::Int64
key::Int
statefile::String
spin::Ptr{Cvoid}
end
@inline function dointegrate!(x::Cuhre{T, D}, integrand, integral,
error, prob, neval, fail, nregions) where {T, D}
userdata = ismissing(x.userdata) ? x.func : (x.func, x.userdata)
ccall((:llCuhre, libcuba), Cdouble,
(Cint, # ndim
Cint, # ncomp
Ptr{Cvoid}, # integrand
Any, # userdata
Int64, # nvec
Cdouble, # rtol
Cdouble, # atol
Cint, # flags
Int64, # minevals
Int64, # maxevals
Cint, # key
Ptr{Cchar}, # statefile
Ptr{Cvoid}, # spin
Ptr{Cint}, # nregions
Ptr{Int64}, # neval
Ptr{Cint}, # fail
Ptr{Cdouble}, # integral
Ptr{Cdouble}, # error
Ptr{Cdouble}),# prob
# Input
x.ndim, x.ncomp, integrand, userdata, x.nvec, x.rtol, x.atol,
x.flags, x.minevals, x.maxevals, x.key, x.statefile, x.spin,
# Output
nregions, neval, fail, integral, error, prob)
end
"""
cuhre(integrand, ndim=2, ncomp=1[, keywords]) -> integral, error, probability, neval, fail, nregions
Calculate integral of `integrand` over the unit hypercube in `ndim` dimensions
using Cuhre algorithm. `integrand` is a vectorial function with `ncomp`
components. `ncomp` defaults to 1, `ndim` defaults to 2 and must be ≥ 2.
Accepted keywords:
* `userdata`
* `nvec`
* `rtol`
* `atol`
* `flags`
* `minevals`
* `maxevals`
* `key`
* `statefile`
* `spin`
"""
function cuhre(integrand::T, ndim::Integer=2, ncomp::Integer=1;
nvec::Integer=NVEC, rtol::Real=RTOL, atol::Real=ATOL,
flags::Integer=FLAGS, minevals::Real=MINEVALS,
maxevals::Real=MAXEVALS, key::Integer=KEY,
statefile::AbstractString=STATEFILE, spin::Ptr{Cvoid}=SPIN,
abstol=missing, reltol=missing, userdata=missing) where {T}
atol_,rtol_ = tols(atol,rtol,abstol,reltol)
# Cuhre requires "ndim" to be at least 2, even for an integral over a one
# dimensional domain. Instead, we don't prevent users from setting wrong
# "ndim" values like 0 or negative ones.
ndim >= 2 || throw(ArgumentError("In Cuhre ndim must be ≥ 2"))
return dointegrate(Cuhre(integrand, userdata, ndim, ncomp, Int64(nvec), Cdouble(rtol_),
Cdouble(atol_), flags, trunc(Int64, minevals),
trunc(Int64, maxevals), key, String(statefile), spin))
end
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 4492 | ### divonne.jl --- Integrate with Divonne method
struct Divonne{T, D} <: Integrand{T}
func::T
userdata::D
ndim::Int
ncomp::Int
nvec::Int64
rtol::Cdouble
atol::Cdouble
flags::Int
seed::Int
minevals::Int64
maxevals::Int64
key1::Int
key2::Int
key3::Int
maxpass::Int
border::Cdouble
maxchisq::Cdouble
mindeviation::Cdouble
ngiven::Int64
ldxgiven::Int
xgiven::Array{Cdouble, 2}
nextra::Int64
peakfinder::Ptr{Cvoid}
statefile::String
spin::Ptr{Cvoid}
end
@inline function dointegrate!(x::Divonne{T, D}, integrand, integral,
error, prob, neval, fail, nregions) where {T, D}
userdata = ismissing(x.userdata) ? x.func : (x.func, x.userdata)
ccall((:llDivonne, libcuba), Cdouble,
(Cint, # ndim
Cint, # ncomp
Ptr{Cvoid}, # integrand
Any, # userdata
Int64, # nvec
Cdouble, # rtol
Cdouble, # atol
Cint, # flags
Cint, # seed
Int64, # minevals
Int64, # maxevals
Cint, # key1
Cint, # key2
Cint, # key3
Cint, # maxpass
Cdouble, # border
Cdouble, # maxchisq
Cdouble, # mindeviation
Int64, # ngiven
Cint, # ldxgiven
Ptr{Cdouble}, # xgiven
Int64, # nextra
Ptr{Cvoid}, # peakfinder
Ptr{Cchar}, # statefile
Ptr{Cvoid}, # spin
Ptr{Cint}, # nregions
Ptr{Int64}, # neval
Ptr{Cint}, # fail
Ptr{Cdouble}, # integral
Ptr{Cdouble}, # error
Ptr{Cdouble}),# prob
# Input
x.ndim, x.ncomp, integrand, userdata, x.nvec, x.rtol,
x.atol, x.flags, x.seed, x.minevals, x.maxevals, x.key1, x.key2,
x.key3, x.maxpass, x.border, x.maxchisq, x.mindeviation, x.ngiven,
x.ldxgiven, x.xgiven, x.nextra, x.peakfinder, x.statefile, x.spin,
# Output
nregions, neval, fail, integral, error, prob)
end
"""
divonne(integrand, ndim=2, ncomp=1[, keywords]) -> integral, error, probability, neval, fail, nregions
Calculate integral of `integrand` over the unit hypercube in `ndim` dimensions
using Divonne algorithm. `integrand` is a vectorial function with `ncomp`
components. `ncomp` defaults to 1, `ndim` defaults to 2 and must be ≥ 2.
Accepted keywords:
* `userdata`
* `nvec`
* `rtol`
* `atol`
* `flags`
* `seed`
* `minevals`
* `maxevals`
* `key1`
* `key2`
* `key3`
* `maxpass`
* `border`
* `maxchisq`
* `mindeviation`
* `ngiven`
* `ldxgiven`
* `xgiven`
* `nextra`
* `peakfinder`
* `statefile`
* `spin`
"""
function divonne(integrand::T, ndim::Integer=2, ncomp::Integer=1;
nvec::Integer=NVEC, rtol::Real=RTOL,
atol::Real=ATOL, flags::Integer=FLAGS,
seed::Integer=SEED, minevals::Real=MINEVALS,
maxevals::Real=MAXEVALS, key1::Integer=KEY1,
key2::Integer=KEY2, key3::Integer=KEY3,
maxpass::Integer=MAXPASS, border::Real=BORDER,
maxchisq::Real=MAXCHISQ,
mindeviation::Real=MINDEVIATION,
ngiven::Integer=NGIVEN, ldxgiven::Integer=LDXGIVEN,
xgiven::Array{Cdouble,2}=zeros(Cdouble, ldxgiven,
ngiven),
nextra::Integer=NEXTRA,
peakfinder::Ptr{Cvoid}=PEAKFINDER,
statefile::AbstractString=STATEFILE,
spin::Ptr{Cvoid}=SPIN, reltol=missing, abstol=missing, userdata=missing) where {T}
atol_,rtol_ = tols(atol,rtol,abstol,reltol)
# Divonne requires "ndim" to be at least 2, even for an integral over a one
# dimensional domain. Instead, we don't prevent users from setting wrong
# "ndim" values like 0 or negative ones.
ndim >= 2 || throw(ArgumentError("In Divonne ndim must be ≥ 2"))
return dointegrate(Divonne(integrand, userdata, ndim, ncomp, Int64(nvec), Cdouble(rtol_),
Cdouble(atol_), flags, seed, trunc(Int64, minevals),
trunc(Int64, maxevals), key1, key2, key3, maxpass,
Cdouble(border), Cdouble(maxchisq), Cdouble(mindeviation),
Int64(ngiven), ldxgiven, xgiven, Int64(nextra),
peakfinder, String(statefile), spin))
end
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 2870 | ### suave.jl --- Integrate with Suave method
struct Suave{T, D} <: Integrand{T}
func::T
userdata::D
ndim::Int
ncomp::Int
nvec::Int64
rtol::Cdouble
atol::Cdouble
flags::Int
seed::Int
minevals::Int64
maxevals::Int64
nnew::Int64
nmin::Int64
flatness::Cdouble
statefile::String
spin::Ptr{Cvoid}
end
@inline function dointegrate!(x::Suave{T, D}, integrand, integral,
error, prob, neval, fail, nregions) where {T, D}
userdata = ismissing(x.userdata) ? x.func : (x.func, x.userdata)
ccall((:llSuave, libcuba), Cdouble,
(Cint, # ndim
Cint, # ncomp
Ptr{Cvoid}, # integrand
Any, # userdata
Int64, # nvec
Cdouble, # rtol
Cdouble, # atol
Cint, # flags
Cint, # seed
Int64, # minevals
Int64, # maxevals
Int64, # nnew
Int64, # nmin
Cdouble, # flatness
Ptr{Cchar}, # statefile
Ptr{Cvoid}, # spin
Ptr{Cint}, # nregions
Ptr{Int64}, # neval
Ptr{Cint}, # fail
Ptr{Cdouble}, # integral
Ptr{Cdouble}, # error
Ptr{Cdouble}),# prob
# Input
x.ndim, x.ncomp, integrand, userdata, x.nvec,
x.rtol, x.atol, x.flags, x.seed, x.minevals, x.maxevals,
x.nnew, x.nmin, x.flatness, x.statefile, x.spin,
# Output
nregions, neval, fail, integral, error, prob)
end
"""
suave(integrand, ndim=1, ncomp=1[, keywords]) -> integral, error, probability, neval, fail, nregions
Calculate integral of `integrand` over the unit hypercube in `ndim` dimensions
using Suave algorithm. `integrand` is a vectorial function with `ncomp`
components. `ndim` and `ncomp` default to 1.
Accepted keywords:
* `userdata`
* `nvec`
* `rtol`
* `atol`
* `flags`
* `seed`
* `minevals`
* `maxevals`
* `nnew`
* `nmin`
* `flatness`
* `statefile`
* `spin`
"""
function suave(integrand::T, ndim::Integer=1, ncomp::Integer=1;
nvec::Integer=NVEC, rtol::Real=RTOL, atol::Real=ATOL,
flags::Integer=FLAGS, seed::Integer=SEED,
minevals::Real=MINEVALS, maxevals::Real=MAXEVALS,
nnew::Integer=NNEW, nmin::Integer=NMIN, flatness::Real=FLATNESS,
statefile::AbstractString=STATEFILE, spin::Ptr{Cvoid}=SPIN,
reltol=missing, abstol=missing, userdata=missing) where {T}
atol_,rtol_ = tols(atol,rtol,abstol,reltol)
return dointegrate(Suave(integrand, userdata, ndim, ncomp, Int64(nvec), Cdouble(rtol_),
Cdouble(atol_), flags, seed, trunc(Int64, minevals),
trunc(Int64, maxevals), Int64(nnew), Int64(nmin),
Cdouble(flatness), String(statefile), spin))
end
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 2999 | ### vegas.jl --- Integrate with Vegas method
struct Vegas{T, D} <: Integrand{T}
func::T
userdata::D
ndim::Int
ncomp::Int
nvec::Int64
rtol::Cdouble
atol::Cdouble
flags::Int
seed::Int
minevals::Int64
maxevals::Int64
nstart::Int64
nincrease::Int64
nbatch::Int64
gridno::Int
statefile::String
spin::Ptr{Cvoid}
end
@inline function dointegrate!(x::Vegas{T, D}, integrand, integral,
error, prob, neval, fail, nregions) where {T, D}
userdata = ismissing(x.userdata) ? x.func : (x.func, x.userdata)
ccall((:llVegas, libcuba), Cdouble,
(Cint, # ndim
Cint, # ncomp
Ptr{Cvoid}, # integrand
Any, # userdata
Int64, # nvec
Cdouble, # rtol
Cdouble, # atol
Cint, # flags
Cint, # seed
Int64, # minevals
Int64, # maxevals
Int64, # nstart
Int64, # nincrease
Int64, # nbatch
Cint, # gridno
Ptr{Cchar}, # statefile
Ptr{Cvoid}, # spin
Ptr{Int64}, # neval
Ptr{Cint}, # fail
Ptr{Cdouble}, # integral
Ptr{Cdouble}, # error
Ptr{Cdouble}),# prob
# Input
x.ndim, x.ncomp, integrand, userdata, x.nvec,
x.rtol, x.atol, x.flags, x.seed, x.minevals, x.maxevals,
x.nstart, x.nincrease, x.nbatch, x.gridno, x.statefile, x.spin,
# Output
neval, fail, integral, error, prob)
end
"""
vegas(integrand, ndim=1, ncomp=1[, keywords]) -> integral, error, probability, neval, fail, nregions
Calculate integral of `integrand` over the unit hypercube in `ndim` dimensions
using Vegas algorithm. `integrand` is a vectorial function with `ncomp`
components. `ndim` and `ncomp` default to 1.
Accepted keywords:
* `userdata`
* `nvec`
* `rtol`
* `atol`
* `flags`
* `seed`
* `minevals`
* `maxevals`
* `nstart`
* `nincrease`
* `nbatch`
* `gridno`
* `statefile`
* `spin`
"""
function vegas(integrand::T, ndim::Integer=1, ncomp::Integer=1;
nvec::Integer=NVEC, rtol::Real=RTOL, atol::Real=ATOL,
flags::Integer=FLAGS, seed::Integer=SEED,
minevals::Real=MINEVALS, maxevals::Real=MAXEVALS,
nstart::Integer=NSTART, nincrease::Integer=NINCREASE,
nbatch::Integer=NBATCH, gridno::Integer=GRIDNO,
statefile::AbstractString=STATEFILE, spin::Ptr{Cvoid}=SPIN,
reltol=missing, abstol=missing, userdata=missing) where {T}
atol_,rtol_ = tols(atol,rtol,abstol,reltol)
return dointegrate(Vegas(integrand, userdata, ndim, ncomp, Int64(nvec), Cdouble(rtol_),
Cdouble(atol_), flags, seed, trunc(Int64, minevals),
trunc(Int64, maxevals), Int64(nstart),
Int64(nincrease), Int64(nbatch), gridno,
String(statefile), spin))
end
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | code | 5839 | ### runtests.jl --- Test suite for Cuba.jl
using Cuba
using Test
using Distributed, LinearAlgebra
f1(x, y, z) = sin(x) * cos(y) * exp(z)
f2(x, y, z) = exp(-(x * x + y * y + z * z))
f3(x, y, z) = 1 / (1 - x * y * z)
function integrand1(x, f)
f[1] = f1(x[1], x[2], x[3])
f[2] = f2(x[1], x[2], x[3])
f[3] = f3(x[1], x[2], x[3])
end
# Make sure using "addprocs" doesn't make the program segfault.
addprocs(1)
Cuba.cores(0, 10000)
Cuba.accel(0, 1000)
# Test results and make sure the estimation of error is exact.
ncomp = 3
@testset "$alg" for (alg, atol) in ((vegas, 1e-4), (suave, 1e-3),
(divonne, 1e-4), (cuhre, 1e-8))
# Make sure that using maxevals > typemax(Int32) doesn't result into InexactError.
if alg == divonne
result = @inferred alg(integrand1, 3, ncomp, atol=atol, rtol=1e-8,
flags=0, border=1e-5, maxevals=3000000000)
else
result = @inferred alg(integrand1, 3, ncomp, atol=atol, rtol=1e-8,
flags=0, maxevals=3e9)
end
# Analytic expressions: ((e-1)*(1-cos(1))*sin(1), (sqrt(pi)*erf(1)/2)^3, zeta(3))
for (i, answer) in enumerate((0.6646696797813771, 0.41653838588663805, 1.2020569031595951))
@test result[1][i] ≈ answer atol = result[2][i]
end
end
struct UserData
para::Float64
end
function integrand2(x, f, userdata)
f[1] = f1(x[1], x[2], x[3]) + userdata.para
f[2] = f2(x[1], x[2], x[3]) + userdata.para
f[3] = f3(x[1], x[2], x[3]) + userdata.para
end
@testset "$alg with userdata" for (alg, atol) in ((vegas, 1e-4), (suave, 1e-3),
(divonne, 1e-4), (cuhre, 1e-8))
# Make sure that using maxevals > typemax(Int32) doesn't result into InexactError.
if alg == divonne
result = @inferred alg(integrand2, 3, ncomp, atol=atol, rtol=1e-8,
flags=0, border=1e-5, maxevals=3000000000, userdata=UserData(0.1))
else
result = @inferred alg(integrand2, 3, ncomp, atol=atol, rtol=1e-8,
flags=0, maxevals=3e9, userdata=UserData(0.1))
end
# Analytic expressions: ((e-1)*(1-cos(1))*sin(1)+1.0, (sqrt(pi)*erf(1)/2)^3+1.0, zeta(3)+1.0)
for (i, answer) in enumerate((0.7646696797813771, 0.51653838588663805, 1.3020569031595951))
@test result[1][i] ≈ answer atol = result[2][i]
end
end
@testset "ndim" begin
func(x, f) = (f[] = norm(x))
answer_1d = 1 / 2 # integral of abs(x) in 1D
answer_2d = (8 * asinh(1) + 2^(7 / 2)) / 24 # integral of sqrt(x^2 + y^2) in 2D
@test @inferred(vegas(func))[1][1] ≈ answer_1d rtol = 1e-4
@test @inferred(suave(func))[1][1] ≈ answer_1d rtol = 1e-2
@test @inferred(divonne(func))[1][1] ≈ answer_2d rtol = 1e-4
@test @inferred(cuhre(func))[1][1] ≈ answer_2d rtol = 1e-4
@test_throws ArgumentError cuhre(func, 1)
@test_throws ArgumentError divonne(func, 1)
end
@testset "Integral over infinite domain" begin
func(x) = log(1 + x^2) / (1 + x^2)
result, rest = @inferred cuhre((x, f) -> f[1] = func(x[1] / (1 - x[1])) / (1 - x[1])^2,
atol=1e-12, rtol=1e-10)
@test result[1] ≈ pi * log(2) atol = 3e-12
end
@testset "Vectorization" begin
for alg in (vegas, suave, divonne, cuhre)
result1, err1, _ = @inferred alg((x, f) -> f[1] = x[1] + cos(x[2]) - exp(x[3]), 3)
result2, err2, _ = @inferred alg((x, f) -> f[1, :] .= x[1, :] .+ cos.(x[2, :]) .- exp.(x[3, :]),
3, nvec=10)
@test result1 == result2
@test err1 == err2
result1, err1, _ = @inferred alg((x, f) -> begin
f[1] = sin(x[1])
f[2] = sqrt(x[2])
end, 2, 2)
result2, err2, _ = @inferred alg((x, f) -> begin
f[1, :] .= sin.(x[1, :])
f[2, :] .= sqrt.(x[2, :])
end,
2, 2, nvec=10)
@test result1 == result2
@test err1 == err2
end
end
@testset "Vectorization with userdata" begin
for alg in (vegas, suave, divonne, cuhre)
result1, err1, _ = @inferred alg((x, f, u) -> f[1] = u.para + x[1] + cos(x[2]) - exp(x[3]), 3; userdata=UserData(0.1))
result2, err2, _ = @inferred alg((x, f, u) -> f[1, :] .= u.para .+ x[1, :] .+ cos.(x[2, :]) .- exp.(x[3, :]),
3, nvec=10, userdata=UserData(0.1))
@test result1 == result2
@test err1 == err2
result1, err1, _ = @inferred alg((x, f, u) -> begin
f[1] = sin(x[1]) + u.para
f[2] = sqrt(x[2]) + u.para
end, 2, 2; userdata=UserData(0.1))
result2, err2, _ = @inferred alg((x, f, u) -> begin
f[1, :] .= sin.(x[1, :]) .+ u.para
f[2, :] .= sqrt.(x[2, :]) .+ u.para
end,
2, 2, nvec=10, userdata=UserData(0.1))
@test result1 == result2
@test err1 == err2
end
end
@testset "Show" begin
@test occursin("Note: The desired accuracy was reached",
repr(vegas((x, f) -> f[1] = x[1])))
@test occursin("Note: The accuracy was not met",
repr(suave((x, f) -> f[1] = x[1], atol=1e-12, rtol=1e-12)))
@test occursin("Try increasing `maxevals` to",
repr(divonne((x, f) -> f[1] = exp(x[1]) * cos(x[1]),
atol=1e-9, rtol=1e-9)))
@test occursin("Note: Dimension out of range",
repr(Cuba.dointegrate(Cuba.Cuhre((x, f) -> f[1] = x[1], missing, 1, 1, Int64(Cuba.NVEC),
Cuba.RTOL, Cuba.ATOL, Cuba.FLAGS,
Int64(Cuba.MINEVALS), Int64(Cuba.MAXEVALS),
Cuba.KEY, Cuba.STATEFILE, Cuba.SPIN))))
end
# Make sure these functions don't crash.
Cuba.init(C_NULL, C_NULL)
Cuba.exit(C_NULL, C_NULL)
# Dummy call just to increase code coverage
Cuba.integrand_ptr(Cuba.generic_integrand!)
Cuba.integrand_ptr_userdata(Cuba.generic_integrand!, missing)
Cuba.integrand_ptr_nvec(Cuba.generic_integrand!)
Cuba.integrand_ptr_nvec_userdata(Cuba.generic_integrand!, missing)
| Cuba | https://github.com/giordano/Cuba.jl.git |
|
[
"MIT"
] | 2.3.0 | 3a86c9b6f29a5ed3a5ffd9bffb0d4010e2e91f22 | docs | 6459 | # History of Cuba.jl
## v2.3.0 (2022-09-24)
### New Features
* License was changed to MIT "Expat"
([#40](https://github.com/giordano/Cuba.jl/issues/40), [#43](https://github.com/giordano/Cuba.jl/pull/43)).
* It is now possible to directly pass data to the integrand function with the
`userdata` keyword argument
([#39](https://github.com/giordano/Cuba.jl/pull/39)). See the [example in the
documentation](https://giordano.github.io/Cuba.jl/v2.3.0/#Passing-data-to-the-integrand-function)
for more details.
## v2.2.0 (2021-01-28)
### New Features
* Update to support Cuba v4.2.1.
## v2.1.0 (2020-04-12)
### New Features
* The Cuba binary library is now installed as a [JLL
package](https://docs.binarybuilder.org/stable/jll/). This requires Julia
v1.3 or later version.
v2.0.0 (2019-02-27)
-------------------
### Breaking Changes
* Support for Julia 0.7 was dropped.
### New Features
* When using the package in the REPL, the result of the integration now has a
more informative description about the error flag.
v1.0.0 (2018-08-17)
-------------------
### Breaking Changes
* Support for Julia 0.6 was dropped.
* Keyword arguments `reltol` and `abstol` are now called `rtol` and `atol`,
respectively, to match keywords in `isapprox` function.
* Deprecated functions `llvegas`, `llsuave`, `lldivonne`, and `llcuhre` have
been removed.
### New Features
* The build script has been updated, now the package supports Linux-musl and
FreeBSD systems.
v0.5.0 (2018-05-15)
-------------------
### New Features
* The package now
uses
[`BinaryProvider.jl`](https://github.com/JuliaPackaging/BinaryProvider.jl) to
install a pre-built version of the Cuba library on all platforms.
### Breaking Changes
* Support for Julia 0.5 was dropped
* The default value of argument `ndim` has been changed to 2 in `divonne` and
`cuhre`. These algorithms require the number of dimensions to be at least 2.
Now setting `ndim` to 1 throws an error. Your code will not be affected by
this change if you did not explicitly set `ndim`. See
issue [#14](https://github.com/giordano/Cuba.jl/issues/14).
v0.4.0 (2017-07-08)
-------------------
### Breaking Changes
* Now `vegas`, `suave`, `divonne`, and `cuhre` wrap the 64-bit integers
functions. The 32-bit integers functions are no more available. `llvegas`,
`llsuave`, `lldivonne`, `llcuhre` are deprecated and will be removed at some
point in the future. This change reduces confusion about the function to use.
### New Features
* Now it is possible to vectorize a function in order to speed up its evaluation
(see issue [#10](https://github.com/giordano/Cuba.jl/issues/10) and
PR [#11](https://github.com/giordano/Cuba.jl/pull/11)).
* The result of integration is wrapped in an `Integral` object. This is not a
breaking change because its fields can be iterated over like a tuple, exactly
as before.
v0.3.1 (2017-05-02)
-------------------
### Improvements
* Small performance improvements by avoiding dynamic dispatch in callback
([#6](https://github.com/giordano/Cuba.jl/pull/6)). No user visible change.
v0.3.0 (2017-01-24)
-------------------
### Breaking Changes
* Support for Julia 0.4 was dropped
* Integrators functions with uppercase names were removed. They were deprecated
in v0.2.0
### New Features
* New 64-bit integers functions `llvegas`, `llsuave`, `lldivonne`, `llcuhre` are
provided. They should be used in cases where convergence is not reached
within the ordinary 32-bit integer
range ([#4](https://github.com/giordano/Cuba.jl/issues/4))
v0.2.0 (2016-10-15)
-------------------
This release faces some changes to the user interface. Be aware of them when
upgrading.
### New Features
* `ndim` and `ncomp` arguments can be omitted. In that case they default to 1.
This change is not breaking, old syntax will continue to work.
### Breaking Changes
All integrator functions and some optional keywords have been renamed for more
consistency with the Julia environment. Here is the detailed list:
* Integrators functions have been renamed to lowercase name: `Vegas` to `vegas`,
`Suave` to `suave`, `Divonne` to `divonne`, `Cuhre` to `cuhre`. The uppercase
variants are still available but deprecated, they will be removed at some
point in the future.
* Optional keywords changes: `epsabs` to `abstol`, `epsrel` to `reltol`,
`maxeval` to `maxevals`, `mineval` to `minevals`.
v0.1.4 (2016-08-21)
-------------------
* A new version of Cuba library is downloaded to be compiled on GNU/Linux and
Mac OS systems. There have been only small changes for compatibility with
recent GCC versions, no actual change to the library nor to the Julia wrapper.
Nothing changed for Windows users.
v0.1.3 (2016-08-11)
-------------------
### New Features
* A tagged version of Cuba library is now downloaded when building the package.
This ensures reproducibility of the results of a given `Cuba.jl` release.
v0.1.2 (2016-08-11)
-------------------
### New Features
* Windows (`i686` and `x86_64` architectures) supported
([#2](https://github.com/giordano/Cuba.jl/issues/2))
v0.1.1 (2016-08-05)
-------------------
### Bug Fixes
* Fix warnings in Julia 0.5
v0.1.0 (2016-06-08)
-------------------
### New Features
* Module precompilation enabled
v0.0.5 (2016-04-12)
-------------------
### New Features
* User interface greatly simplified (thanks to Steven G. Johnson;
[#3](https://github.com/giordano/Cuba.jl/issues/3)). This change is
**backward incompatible**. See documentation for details
v0.0.4 (2016-04-10)
-------------------
### New Features
* New complete documentation, available at http://cubajl.readthedocs.org/ and
locally in `docs/` directory
### Breaking Changes
* `verbose` keyword renamed to `flags`
### Bug Fixes
* Number of cores fixed to 0 to avoid crashes when Julia has more than 1 process
* In `Cuhre` and `Divonne`, force `ndim` to be 2 when the user sets it to 1
v0.0.3 (2016-04-06)
-------------------
### New Features
* Add `cores`, `accel`, `init`, `exit` functions. They will likely not be very
useful for most users, so they are neither exported nor documented. See the
Cuba manual for more information
### Breaking Changes
* Make `ndim` and `ncomp` arguments mandatory
### Bug Fixes
* Fix build script
v0.0.2 (2016-04-04)
-------------------
### Bug Fixes
* Fix path of libcuba
v0.0.1 (2016-04-04)
-------------------
* First release
# Cuba.jl
| **Documentation** | **Build Status** | **Code Coverage** |
|:---------------------------------------:|:-----------------------------------:|:-------------------------------:|
| [![][docs-stable-img]][docs-stable-url] | [![Build Status][gha-img]][gha-url] | [![][coveral-img]][coveral-url] |
| [![][docs-latest-img]][docs-latest-url] | | [![][codecov-img]][codecov-url] |
Introduction
------------
`Cuba.jl` is a library for multidimensional numerical integration with different
algorithms in [Julia](http://julialang.org/).
This is just a Julia wrapper around the C
[Cuba library](http://www.feynarts.de/cuba/), version 4.2, by **Thomas Hahn**.
All the credit goes to him for the underlying functions; blame me for any
problem with the Julia interface. Feel free to report bugs and make suggestions
at https://github.com/giordano/Cuba.jl/issues.
All algorithms provided by the Cuba library are supported in `Cuba.jl`:
* `vegas` (type: Monte Carlo; variance reduction with importance sampling)
* `suave` (type: Monte Carlo; variance reduction with globally adaptive
subdivision + importance sampling)
* `divonne` (type: Monte Carlo or deterministic; variance reduction with
stratified sampling, aided by methods from numerical optimization)
* `cuhre` (type: deterministic; variance reduction with globally adaptive
subdivision)
Integration is performed on the n-dimensional unit hypercube [0, 1]^n. For more
details on the algorithms see the manual included in Cuba library and available
in `deps/usr/share/cuba.pdf` after successful installation of `Cuba.jl`.
`Cuba.jl` is available on all platforms supported by Julia.
Installation
------------
The latest version of `Cuba.jl` is available for Julia 1.3 and later versions,
and can be installed with [Julia built-in package
manager](https://julialang.github.io/Pkg.jl/stable/). In a Julia session, after
entering the package manager mode with `]`, run the command
```julia
pkg> update
pkg> add Cuba
```
Older versions are also available for Julia 0.4-1.2.
Usage
-----
After installing the package, run
``` julia
using Cuba
```
or put this command into your Julia script.
`Cuba.jl` provides the following functions to integrate:
``` julia
vegas(integrand, ndim, ncomp[; keywords...])
suave(integrand, ndim, ncomp[; keywords...])
divonne(integrand, ndim, ncomp[; keywords...])
cuhre(integrand, ndim, ncomp[; keywords...])
```
These functions wrap the 64-bit integer functions provided by the Cuba library.
The only mandatory argument is:
* `integrand`: the function to be integrated
Optional positional arguments are:
* `ndim`: the number of dimensions of the integration domain. Defaults to 1 in
`vegas` and `suave`, and to 2 in `divonne` and `cuhre`. Note: `ndim` must be
at least 2 with the latter two methods.
* `ncomp`: the number of components of the integrand. Defaults to 1
`ndim` and `ncomp` arguments must appear in this order, so you cannot omit
`ndim` but not `ncomp`. `integrand` should be a function `integrand(x, f)`
taking two arguments:
- the input vector `x` of length `ndim`
- the output vector `f` of length `ncomp`, used to set the value of each
component of the integrand at point `x`
Also
[anonymous functions](https://docs.julialang.org/en/v1/manual/functions/#man-anonymous-functions-1)
are allowed as `integrand`. For those familiar with
[`Cubature.jl`](https://github.com/stevengj/Cubature.jl) package, this is the
same syntax used for integrating vector-valued functions.
For example, the integral
```
∫_0^1 cos(x) dx = sin(1) = 0.8414709848078965
```
can be computed with one of the following commands
``` julia
julia> vegas((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.8414910005259609 ± 5.2708169787733e-5 (prob.: 0.028607201257039333)
Integrand evaluations: 13500
Number of subregions: 0
Note: The desired accuracy was reached
julia> suave((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.8411523690658836 ± 8.357995611133613e-5 (prob.: 1.0)
Integrand evaluations: 22000
Number of subregions: 22
Note: The desired accuracy was reached
julia> divonne((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.841468071955942 ± 5.3955070531551656e-5 (prob.: 0.0)
Integrand evaluations: 1686
Number of subregions: 14
Note: The desired accuracy was reached
julia> cuhre((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.8414709848078966 ± 2.2204460420128823e-16 (prob.: 3.443539937576958e-5)
Integrand evaluations: 195
Number of subregions: 2
Note: The desired accuracy was reached
```
The integrating functions `vegas`, `suave`, `divonne`, and `cuhre` return an
`Integral` object whose fields are
``` julia
integral :: Vector{Float64}
error :: Vector{Float64}
probability :: Vector{Float64}
neval :: Int64
fail :: Int32
nregions :: Int32
```
The first three fields are vectors with length `ncomp`, the last three are
scalars. The `Integral` object can also be iterated over like a tuple. In
particular, if you assign the output of integration functions to the variable
named `result`, you can access the value of the `i`-th component of the integral
with `result[1][i]` or `result.integral[i]` and the associated error with
`result[2][i]` or `result.error[i]`. The details of other quantities can be
found in Cuba manual.
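For instance, here is a minimal sketch of reading off the results (the integrand is just illustrative):

``` julia
using Cuba

result = cuhre((x, f) -> f[1] = cos(x[1]) * cos(x[2]))
result.integral[1] # estimate of the integral, ≈ sin(1)^2
result.error[1] # its estimated absolute error
integral, error, probability = result # iteration also works, like a tuple
```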
All other arguments listed in Cuba documentation can be passed as optional
keywords.
### Documentation ###
A more detailed manual of `Cuba.jl`, with many complete examples, is available
at https://giordano.github.io/Cuba.jl/stable/.
Related projects
----------------
There are other Julia packages for multidimensional numerical integration:
* [`Cubature.jl`](https://github.com/stevengj/Cubature.jl)
* [`HCubature.jl`](https://github.com/stevengj/HCubature.jl)
* [`NIntegration.jl`](https://github.com/pabloferz/NIntegration.jl)
License
-------
The Cuba.jl package is released under the terms of the MIT "Expat" License.
Note that the binary library [Cuba](http://www.feynarts.de/cuba/) is distributed
with the GNU Lesser General Public License. The original author of Cuba.jl is
Mosè Giordano. If you use this library for your work, please credit Thomas Hahn
(citable papers about Cuba library:
<https://ui.adsabs.harvard.edu/abs/2005CoPhC.168...78H/abstract> and
<https://ui.adsabs.harvard.edu/abs/2015JPhCS.608a2066H/abstract>).
[docs-latest-img]: https://img.shields.io/badge/docs-latest-blue.svg
[docs-latest-url]: https://giordano.github.io/Cuba.jl/latest/
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://giordano.github.io/Cuba.jl/stable/
[gha-img]: https://github.com/giordano/Cuba.jl/workflows/CI/badge.svg
[gha-url]: https://github.com/giordano/Cuba.jl/actions?query=workflow%3ACI
[coveral-img]: https://coveralls.io/repos/github/giordano/Cuba.jl/badge.svg?branch=master
[coveral-url]: https://coveralls.io/github/giordano/Cuba.jl?branch=master
[codecov-img]: https://codecov.io/gh/giordano/Cuba.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/giordano/Cuba.jl
Cuba
====
```@meta
DocTestSetup = quote
using Cuba
end
```
Introduction
------------
[`Cuba.jl`](https://github.com/giordano/Cuba.jl) is a
[Julia](http://julialang.org/) library for multidimensional [numerical
integration](https://en.wikipedia.org/wiki/Numerical_integration) of real-valued
functions of real arguments, using different algorithms.
This is just a Julia wrapper around the C [Cuba
library](http://www.feynarts.de/cuba/), version 4.2, by **Thomas Hahn**. All
the credits goes to him for the underlying functions, blame me for any problem
with the Julia interface.
All algorithms provided by Cuba library are supported in `Cuba.jl`:
* [Vegas](https://en.wikipedia.org/wiki/VEGAS_algorithm):
| Basic integration method | Type | [Variance reduction](https://en.wikipedia.org/wiki/Variance_reduction) |
| --------------------------------------------------------------------------------------- | -------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| [Sobol quasi-random sample](https://en.wikipedia.org/wiki/Sobol_sequence) | [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_integration) | [importance sampling](https://en.wikipedia.org/wiki/Importance_sampling) |
| [Mersenne Twister pseudo-random sample](https://en.wikipedia.org/wiki/Mersenne_Twister) | " | " |
| [Ranlux pseudo-random sample](http://arxiv.org/abs/hep-lat/9309020) | " | " |
* Suave
| Basic integration method | Type | Variance reduction |
| ------------------------------------- | ----------- | ---------------------------------------------------------------------------------------------------------- |
| Sobol quasi-random sample | Monte Carlo | globally [adaptive subdivision](https://en.wikipedia.org/wiki/Adaptive_quadrature) and importance sampling |
| Mersenne Twister pseudo-random sample | " | " |
| Ranlux pseudo-random sample | " | " |
* Divonne
| Basic integration method | Type | Variance reduction |
| ------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------- |
| Korobov quasi-random sample | Monte Carlo | [stratified sampling](https://en.wikipedia.org/wiki/Stratified_sampling) aided by methods from numerical optimization |
| Sobol quasi-random sample | " | " |
| Mersenne Twister pseudo-random sample | " | " |
| Ranlux pseudo-random sample | " | " |
| cubature rules | deterministic | " |
* Cuhre
| Basic integration method | Type | Variance reduction |
| ------------------------ | ------------- | ----------------------------- |
| cubature rules | deterministic | globally adaptive subdivision |
For more details on the algorithms see the manual included in Cuba
library and available in `deps/usr/share/cuba.pdf` after successful
installation of `Cuba.jl`.
Integration is always performed on the ``n``-dimensional [unit
hypercube](https://en.wikipedia.org/wiki/Hypercube) ``[0, 1]^{n}``.
!!! tip "Integrate over different domains"
If you want to compute an integral over a different set, you have to scale the
integrand function in order to have an equivalent integral on ``[0, 1]^{n}`` using
[substitution
rules](https://en.wikipedia.org/wiki/Integration_by_substitution). For example,
recall that in one dimension
```math
\int_{a}^{b} f(x)\,\mathrm{d}x = \int_{0}^{1} f(a + (b - a) y) (b -
a)\,\mathrm{d}y
```
where the final ``(b - a)`` is the one-dimensional version of the Jacobian.
Integration over a semi-infinite or an infinite domain is a bit trickier, but you
can follow [this
advice](http://ab-initio.mit.edu/wiki/index.php/Cubature#Infinite_intervals)
from Steven G. Johnson: to compute an integral over a semi-infinite interval,
you can perform the change of variables ``x=a+y/(1-y)``:
```math
\int_{a}^{\infty} f(x)\,\mathrm{d}x = \int_{0}^{1}
f\left(a + \frac{y}{1 - y}\right)\frac{1}{(1 - y)^2}\,\mathrm{d}y
```
For an infinite interval, you can perform the change of variables ``x=(2y -
1)/((1 - y)y)``:
```math
\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x = \int_{0}^{1}
f\left(\frac{2y - 1}{(1 - y)y}\right)\frac{2y^2 - 2y + 1}{(1 -
y)^2y^2}\,\mathrm{d}y
```
In addition, recall that for an [even
function](https://en.wikipedia.org/wiki/Even_and_odd_functions#Even_functions)
``\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x =
2\int_{0}^{\infty}f(x)\,\mathrm{d}x``, while the integral of an [odd
function](https://en.wikipedia.org/wiki/Even_and_odd_functions#Odd_functions)
over the infinite interval ``(-\infty, \infty)`` is zero.
All this generalizes straightforwardly to more than one dimension. In
[Examples](@ref) section you can find the computation of a 3-dimensional
[integral with non-constant boundaries](#Integral-with-non-constant-boundaries-1)
using `Cuba.jl` and two [integrals over infinite domains](#Integrals-over-Infinite-Domains-1).
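To make the semi-infinite recipe above concrete, here is a minimal sketch (the integrand and method are just illustrative) computing ``\int_{0}^{\infty} e^{-x}\,\mathrm{d}x = 1`` with the substitution ``x = y/(1 - y)``:

```julia
using Cuba

# x = y/(1 - y) maps [0, 1) onto [0, ∞); the Jacobian is 1/(1 - y)^2
result = vegas((y, f) -> f[1] = exp(-y[1] / (1 - y[1])) / (1 - y[1])^2)
result.integral[1] # ≈ 1
```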
`Cuba.jl` is available on all platforms supported by Julia.
Installation
------------
The latest version of `Cuba.jl` is available for Julia 1.3 and later versions,
and can be installed with [Julia built-in package
manager](https://docs.julialang.org/en/v1/stdlib/Pkg/). In a Julia session run
the commands
```julia
pkg> update
pkg> add Cuba
```
Older versions are also available for Julia 0.4-1.2.
Usage
-----
After installing the package, run
```julia
using Cuba
```
or put this command into your Julia script.
`Cuba.jl` provides the following functions to integrate:
```@docs
vegas
suave
divonne
cuhre
```
Large parts of the following sections are borrowed from the Cuba manual.
Refer to it for more details.
`Cuba.jl` wraps the 64-bit integer functions of the Cuba library, in order
to push the range of certain counters to its full extent. In detail, the
following arguments:
- for Vegas: `nvec`, `minevals`, `maxevals`, `nstart`, `nincrease`,
`nbatch`, `neval`,
- for Suave: `nvec`, `minevals`, `maxevals`, `nnew`, `nmin`, `neval`,
- for Divonne: `nvec`, `minevals`, `maxevals`, `ngiven`, `nextra`,
`neval`,
- for Cuhre: `nvec`, `minevals`, `maxevals`, `neval`,
are passed to the Cuba library as 64-bit integers, so they are limited
to be at most
```jldoctest
julia> typemax(Int64)
9223372036854775807
```
There is no way to overcome this limit. See the following sections for
the meaning of each argument.
### Arguments
The only mandatory argument of integrator functions is:
- `integrand` (type: `Function`): the function to be integrated
Optional positional arguments are:
- `ndim` (type: `Integer`): the number of dimensions of the
integration domain. If omitted, defaults to 1 in `vegas` and
`suave`, and to 2 in `divonne` and `cuhre`. Note: `ndim` must be at
least 2 with the latter two methods.
- `ncomp` (type: `Integer`): the number of components of the
integrand. Defaults to 1 if omitted
`integrand` should be a function `integrand(x, f)` taking two arguments:
- the input vector `x` of length `ndim`
- the output vector `f` of length `ncomp`, used to set the value of
each component of the integrand at point `x`
`x` and `f` are matrices with dimensions `(ndim, nvec)` and
`(ncomp, nvec)`, respectively, when `nvec` > 1. See the
[Vectorization](@ref) section below for more information.
Also [anonymous
functions](https://docs.julialang.org/en/v1/manual/functions/#man-anonymous-functions-1)
are allowed as `integrand`. For those familiar with `Cubature.jl` package, this
is the same syntax used for integrating vector-valued functions.
For example, the integral
```math
\int_{0}^{1} \cos (x) \,\mathrm{d}x = \sin(1) = 0.8414709848078965
```
can be computed with one of the following commands
```jldoctest
julia> vegas((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.841491000525961 ± 5.2708169786483034e-5 (prob.: 0.028607201258847748)
Integrand evaluations: 13500
Number of subregions: 0
Note: The desired accuracy was reached
julia> suave((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.8413748866950329 ± 7.772872640815592e-5 (prob.: 1.0)
Integrand evaluations: 23000
Number of subregions: 23
Note: The desired accuracy was reached
julia> divonne((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.841468071955942 ± 5.3955070531551656e-5 (prob.: 1.1102230246251565e-16)
Integrand evaluations: 1686
Number of subregions: 14
Note: The desired accuracy was reached
julia> cuhre((x, f) -> f[1] = cos(x[1]))
Component:
1: 0.8414709848078967 ± 2.304857594221477e-15 (prob.: 4.869900880782919e-5)
Integrand evaluations: 195
Number of subregions: 2
Note: The desired accuracy was reached
```
In section [Examples](@ref) you can find more complete examples. Note that `x`
and `f` are both arrays with type `Float64`, so `Cuba.jl` can be used to
integrate real-valued functions of real arguments. See how to work with a
[complex integrand](#Complex-integrand-1).
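As a further sketch, multiple components can be integrated in a single call (the integrand here is just illustrative):

```julia
using Cuba

# ndim = 2, ncomp = 2: each call fills both components of `f`
result = cuhre((x, f) -> (f[1] = sin(x[1]); f[2] = x[1] * x[2]), 2, 2)
result.integral # ≈ [1 - cos(1), 1/4]
```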
!!! note "Limit on number of components"
The Cuba C library has a hard limit on the number of components that you can use for
your integrand function, which is set at compile time. The build of the library
currently used by `Cuba.jl` has this limit set to 1024, the default. If you use an
integrand with a larger number of components, the integration will fail, which you can
detect by checking the `fail` field of the `Integral` output object, which will be
non-zero, see the "[Output](@ref)" section below.
If you need to integrate functions with components larger than 1024, consider using
other packages which don't have this limitation, like
[`HCubature.jl`](https://github.com/JuliaMath/HCubature.jl) or
[`Cubature.jl`](https://github.com/JuliaMath/Cubature.jl).
!!! note "Compatibility"
If you used `Cuba.jl` until version 0.0.4, be aware that the
user interface has been reworked in version 0.0.5 in a backward
incompatible way.
### Optional Keywords
All other arguments required by Cuba integrator routines can be passed
as optional keywords. `Cuba.jl` uses some reasonable default values in
order to enable users to invoke integrator functions with a minimal set
of arguments. However, if you want to make sure that future changes to some
default values of keywords will not affect your current script, explicitly
specify the value of the keywords.
#### Common Keywords
These are optional keywords common to all functions:
- `nvec` (type: `Integer`, default: `1`): the maximum number of points to be
given to the integrand routine in each invocation. Usually this is 1 but if
the integrand can profit from e.g. Single Instruction Multiple Data (SIMD)
vectorization, a larger value can be chosen. See [Vectorization](@ref)
section.
- `rtol` (type: `Real`, default: `1e-4`), and `atol` (type: `Real`,
default: `1e-12`): the requested relative
(``\varepsilon_{\text{rel}}``) and absolute
(``\varepsilon_{\text{abs}}``) accuracies. The integrator tries to
find an estimate ``\hat{I}`` for the integral ``I`` which for every
component ``c`` fulfills ``|\hat{I}_c - I_c|\leq
\max(\varepsilon_{\text{abs}}, \varepsilon_{\text{rel}} |I_c|)``.
- `flags` (type: `Integer`, default: `0`): flags governing the
integration:
- Bits 0 and 1 are taken as the verbosity level, i.e. `0` to `3`,
unless the `CUBAVERBOSE` environment variable contains an even
higher value (used for debugging).
Level `0` does not print any output, level `1` prints "reasonable"
information on the progress of the integration, level `2` also echoes
the input parameters, and level `3` further prints the subregion results
(if applicable).
- Bit 2 = `0`: all sets of samples collected on a subregion during
the various iterations or phases contribute to the final result.
Bit 2 = `1`, only the last (largest) set of samples is used in
the final result.
- (Vegas and Suave only)
Bit 3 = `0`, apply additional smoothing to the importance
function, this moderately improves convergence for many
integrands.
Bit 3 = `1`, use the importance function without smoothing, this
should be chosen if the integrand has sharp edges.
- Bit 4 = `0`, delete the state file (if one is chosen) when the
integration terminates successfully.
Bit 4 = `1`, retain the state file.
- (Vegas only)
Bit 5 = `0`, take the integrator's state from the state file,
if one is present.
Bit 5 = `1`, reset the integrator's state even if a state file
is present, i.e. keep only the grid. Together with Bit 4 this
allows a grid adapted by one integration to be used for another
integrand.
- Bits 8--31 =: `level` determines the random-number generator.
To select e.g. last samples only and verbosity level 2, pass
`6 = 4 + 2` for the flags.
- `seed` (type: `Integer`, default: `0`): the seed for the
pseudo-random-number generator. This keyword is not available for
[`cuhre`](@ref). The random-number generator is chosen as
follows:
| `seed` | `level` (bits 8--31 of `flags`) | Generator |
| -------- | ------------------------------- | -------------------------------- |
| zero | N/A | Sobol (quasi-random) |
| non-zero | zero | Mersenne Twister (pseudo-random) |
| non-zero | non-zero | Ranlux (pseudo-random) |
Ranlux implements Marsaglia and Zaman's 24-bit RCARRY algorithm
with generation period ``p``, i.e. for every 24 generated numbers
used, another ``p - 24`` are skipped. The luxury level is encoded in
`level` as follows:
- Level 1 (``p = 48``): very long period, passes the gap test but
fails spectral test.
- Level 2 (``p = 97``): passes all known tests, but theoretically
still defective.
- Level 3 (``p = 223``): any theoretically possible correlations
have very small chance of being observed.
- Level 4 (``p = 389``): highest possible luxury, all 24 bits
chaotic.
Levels 5--23 default to 3, values above 24 directly specify the
period ``p``. Note that Ranlux's original level 0, (mis)used for
selecting Mersenne Twister in Cuba, is equivalent to `level` = `24`.
- `minevals` (type: `Real`, default: `0`): the minimum number of
integrand evaluations required
- `maxevals` (type: `Real`, default: `1000000`): the (approximate)
maximum number of integrand evaluations allowed
- `statefile` (type: `AbstractString`, default: `""`): a filename for
storing the internal state. To not store the internal state, put
`""` (empty string, this is the default) or `C_NULL` (C null
pointer).
Cuba can store its entire internal state (i.e. all the information
to resume an interrupted integration) in an external file. The state
file is updated after every iteration. If, on a subsequent
invocation, a Cuba routine finds a file of the specified name, it
loads the internal state and continues from the point it left off.
Needless to say, using an existing state file with a different
integrand generally leads to wrong results.
This feature is useful mainly to define "check-points" in
long-running integrations from which the calculation can be
restarted.
Once the integration reaches the prescribed accuracy, the state file
is removed, unless bit 4 of `flags` (see above) explicitly requests
that it be kept.
- `spin` (type: `Ptr{Void}`, default: `C_NULL`): this is the
placeholder for the "spinning cores" pointer. `Cuba.jl` does not
support parallelization, so this keyword should not have a value
different from `C_NULL`.
- `userdata` (arbitrary Julia type, default: `missing`): user data
passed to the integrand. See [Passing data to the integrand function](@ref) for a usage example.
!!! note "Compatibility"
The keyword `userdata` is only supported by `Cuba.jl` starting from version 2.3.0.
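As a sketch, a few of the common keywords can be combined in a single call (the integrand and the specific values are just illustrative):

```julia
using Cuba

# Tighter tolerances, a fixed pseudo-random seed (Mersenne Twister),
# and an explicit cap on the number of integrand evaluations.
result = suave((x, f) -> f[1] = exp(x[1]);
               rtol = 1e-6, atol = 1e-12, seed = 42, maxevals = 10^7)
```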
#### Vegas-Specific Keywords
These optional keywords can be passed only to [`vegas`](@ref):
- `nstart` (type: `Integer`, default: `1000`): the number of integrand
evaluations per iteration to start with
- `nincrease` (type: `Integer`, default: `500`): the increase in the
number of integrand evaluations per iteration
- `nbatch` (type: `Integer`, default: `1000`): the batch size for
sampling
Vegas samples points not all at once, but in batches of size
`nbatch`, to avoid excessive memory consumption. `1000` is a
reasonable value, though it should not affect performance too much
- `gridno` (type: `Integer`, default: `0`): the slot in the internal
grid table.
It may accelerate convergence to keep the grid accumulated during
one integration for the next one, if the integrands are reasonably
similar to each other. Vegas maintains an internal table with space
for ten grids for this purpose. The slot in this grid is specified
by `gridno`.
If a grid number between `1` and `10` is selected, the grid is not
discarded at the end of the integration, but stored in the
respective slot of the table for a future invocation. The grid is
only re-used if the dimension of the subsequent integration is the
same as the one it originates from.
In repeated invocations it may become necessary to flush a slot in
memory, in which case the negative of the grid number should be set.
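For instance, a hedged sketch of reusing an adapted grid between two similar integrands (the integrands are made up):

```julia
using Cuba

# Store the grid adapted for the first integrand in slot 1, then reuse
# it for a similar integrand; setting gridno = -1 would flush the slot.
vegas((x, f) -> f[1] = exp(-10 * x[1]); gridno = 1)
vegas((x, f) -> f[1] = exp(-11 * x[1]); gridno = 1)
```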
#### Suave-Specific Keywords
These optional keywords can be passed only to [`suave`](@ref):
- `nnew` (type: `Integer`, default: `1000`): the number of new
integrand evaluations in each subdivision
- `nmin` (type: `Integer`, default: `2`): the minimum number of
samples a former pass must contribute to a subregion to be
considered in that region's compound integral value. Increasing
`nmin` may reduce jumps in the ``\chi^2`` value
- `flatness` (type: `Real`, default: `.25`): the type of norm used to
compute the fluctuation of a sample. This determines how prominently
"outliers", i.e. individual samples with a large fluctuation,
figure in the total fluctuation, which in turn determines how a
region is split up. As suggested by its name, `flatness` should be
chosen large for "flat" integrands and small for "volatile"
integrands with high peaks. Note that since `flatness` appears in
the exponent, one should not use too large values (say, no more than
a few hundred) lest terms be truncated internally to prevent
overflow.
#### Divonne-Specific Keywords
These optional keywords can be passed only to [`divonne`](@ref):
- `key1` (type: `Integer`, default: `47`): determines sampling in the
partitioning phase: `key1` ``= 7, 9, 11, 13`` selects the cubature
rule of degree `key1`. Note that the degree-11 rule is available
only in 3 dimensions, the degree-13 rule only in 2 dimensions.
For other values of `key1`, a quasi-random sample of ``n_1 =
|\verb|key1||`` points is used, where the sign of `key1` determines
the type of sample,
- `key1` ``> 0``, use a Korobov quasi-random sample,
- `key1` ``< 0``, use a "standard" sample (a Sobol quasi-random
sample if `seed` ``= 0``, otherwise a pseudo-random sample).
- `key2` (type: `Integer`, default: `1`): determines sampling in
the final integration phase:
`key2` ``= 7, 9, 11, 13`` selects the cubature rule of degree
`key2`. Note that the degree-11 rule is available only in 3
dimensions, the degree-13 rule only in 2 dimensions.
For other values of `key2`, a quasi-random sample is used, where
the sign of `key2` determines the type of sample,
- `key2` ``> 0``, use a Korobov quasi-random sample,
- `key2` ``< 0``, use a "standard" sample (see description of
`key1` above),
and ``n_2 = |\verb|key2||`` determines the number of points,
- ``n_2\geq 40``, sample ``n_2`` points,
- ``n_2 < 40``, sample ``n_2\,n_{\text{need}}`` points, where
``n_{\text{need}}`` is the number of points needed to reach
the prescribed accuracy, as estimated by Divonne from the
results of the partitioning phase
- `key3` (type: `Integer`, default: `1`): sets the strategy for the
refinement phase:
`key3` ``= 0``, do not treat the subregion any further.
`key3` ``= 1``, split the subregion up once more.
Otherwise, the subregion is sampled a third time with `key3`
specifying the sampling parameters exactly as `key2` above.
- `maxpass` (type: `Integer`, default: `5`): controls the thoroughness
of the partitioning phase: The partitioning phase terminates when
the estimated total number of integrand evaluations (partitioning
plus final integration) does not decrease for `maxpass` successive
iterations.
A decrease in points generally indicates that Divonne discovered new
structures of the integrand and was able to find a more effective
partitioning. `maxpass` can be understood as the number of
"safety" iterations that are performed before the partition is
accepted as final and counting consequently restarts at zero
whenever new structures are found.
- `border` (type: `Real`, default: `0.`): the width of the border of
the integration region. Points falling into this border region will
not be sampled directly, but will be extrapolated from two samples
from the interior. Use a non-zero `border` if the integrand function
cannot produce values directly on the integration boundary
- `maxchisq` (type: `Real`, default: `10.`): the ``\chi^2`` value a
single subregion is allowed to have in the final integration phase.
Regions which fail this ``\chi^2`` test and whose sample averages
differ by more than `mindeviation` move on to the refinement phase.
- `mindeviation` (type: `Real`, default: `0.25`): a bound, given as
the fraction of the requested error of the entire integral, which
determines whether it is worthwhile further examining a region that
failed the ``\chi^2`` test. Only if the two sampling averages obtained
for the region differ by more than this bound is the region further
treated.
- `ngiven` (type: `Integer`, default: `0`): the number of points in
the `xgiven` array
- `ldxgiven` (type: `Integer`, default: `0`): the leading dimension of
`xgiven`, i.e. the offset between one point and the next in memory
- `xgiven` (type: `AbstractArray{Real}`, default:
`zeros(Cdouble, ldxgiven, ngiven)`): a list of points where the
integrand might have peaks. Divonne will consider these points when
partitioning the integration region. The idea here is to help the
integrator find the extrema of the integrand in the presence of very
narrow peaks. Even if only the approximate location of such peaks is
known, this can considerably speed up convergence.
- `nextra` (type: `Integer`, default: `0`): the maximum number of
extra points the peak-finder subroutine will return. If `nextra` is
zero, `peakfinder` is not called and an arbitrary object may be
passed in its place, e.g. just 0
- `peakfinder` (type: `Ptr{Void}`, default: `C_NULL`): the peak-finder
subroutine
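As an illustrative sketch, a narrow peak whose approximate location is known can be hinted to Divonne through `xgiven` (the peak position and width here are made up):

```julia
using Cuba

# `xgiven` is an ldxgiven × ngiven matrix; one hinted peak at (0.5, 0.5)
peak = reshape([0.5, 0.5], 2, 1)
result = divonne((x, f) -> f[1] = exp(-500 * ((x[1] - 0.5)^2 + (x[2] - 0.5)^2));
                 ngiven = 1, ldxgiven = 2, xgiven = peak)
```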
#### Cuhre-Specific Keyword
This optional keyword can be passed only to [`cuhre`](@ref):
- `key` (type: `Integer`, default: `0`): chooses the basic integration
rule:
`key` ``= 7, 9, 11, 13`` selects the cubature rule of degree `key`.
Note that the degree-11 rule is available only in 3 dimensions, the
degree-13 rule only in 2 dimensions.
For other values, the default rule is taken, which is the degree-13
rule in 2 dimensions, the degree-11 rule in 3 dimensions, and the
degree-9 rule otherwise.
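For example, a minimal sketch selecting the degree-9 rule explicitly (the integrand is just illustrative):

```julia
using Cuba

result = cuhre((x, f) -> f[1] = sin(x[1]) * cos(x[2]); key = 9)
```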
### Output
The integrating functions [`vegas`](@ref), [`suave`](@ref), [`divonne`](@ref),
and [`cuhre`](@ref) return an `Integral` object whose fields are
```julia
integral :: Vector{Float64}
error :: Vector{Float64}
probability :: Vector{Float64}
neval :: Int64
fail :: Int32
nregions :: Int32
```
The first three fields are arrays with length `ncomp`, the last three are
scalars. The `Integral` object can also be iterated over like a
tuple. In particular, if you assign the output of integrator functions
to the variable named `result`, you can access the value of the `i`-th
component of the integral with `result[1][i]` or `result.integral[i]`
and the associated error with `result[2][i]` or `result.error[i]`.
- `integral` (type: `Vector{Float64}`, with `ncomp` components): the
integral of `integrand` over the unit hypercube
- `error` (type: `Vector{Float64}`, with `ncomp` components): the
presumed absolute error for each component of `integral`
- `probability` (type: `Vector{Float64}`, with `ncomp` components):
  the ``\chi^2``-probability (not the ``\chi^2`` value itself!) that
  `error` is not a reliable estimate of the true integration error. To
  judge the reliability of the result expressed through `prob`,
  remember that it is the null hypothesis that is tested by the
  ``\chi^2`` test, namely that `error` is a reliable estimate. In
  statistics, the null hypothesis may be rejected only if `prob` is
  fairly close to unity, say `prob` ``> 0.95``
- `neval` (type: `Int64`): the actual number of integrand evaluations
needed
- `fail` (type: `Int32`): an error flag:
- `fail` = `0`, the desired accuracy was reached
- `fail` = `-1`, dimension out of range
- `fail` > `0`, the accuracy goal was not met within the allowed
maximum number of integrand evaluations. While Vegas, Suave, and
Cuhre simply return `1`, Divonne can estimate the number of
points by which `maxevals` needs to be increased to reach the
desired accuracy and returns this value.
- `nregions` (type: `Int32`): the actual number of subregions needed
(always `0` in [`vegas`](@ref))
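To illustrate the access patterns described above, the following sketch (assuming some `integrand` with two components is already defined) shows the equivalent ways to read the results:

```julia
result = cuhre(integrand, 2, 2)
# Field access and positional access are equivalent:
result.integral[1] == result[1][1]
result.error[2] == result[2][2]
# The object can also be destructured like a tuple:
integral, error, probability, neval, fail, nregions = result
```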
Vectorization
-------------
Vectorization means evaluating the integrand function for several points
at once. This is also known as [Single Instruction Multiple
Data](https://en.wikipedia.org/wiki/SIMD) (SIMD) paradigm and is
different from ordinary parallelization where independent threads are
executed concurrently. It is usually possible to employ vectorization on
top of parallelization.
`Cuba.jl` cannot automatically vectorize the integrand function, of course, but
it does pass (up to) `nvec` points per integrand call ([Common Keywords](@ref)).
This value need not correspond to the hardware vector length: computing several
points in one call can also make sense, e.g., if the computations have
significant intermediate results in common.
When `nvec` > 1, the input `x` is a matrix of dimensions `(ndim, nvec)`, while
the output `f` is a matrix with dimensions `(ncomp, nvec)`. Vectorization can be
used to evaluate more quickly the integrand function, for example by exploiting
parallelism, thus speeding up computation of the integral. See the section
[Vectorized Function](@ref) below for an example of a vectorized function.
!!! note "Disambiguation"
The `nbatch` argument of [`vegas`](@ref) is related in purpose but
not identical to `nvec`. It internally partitions the sampling done by
Vegas but has no bearing on the number of points given to the integrand.
    On the other hand, it is pointless to choose `nvec` > `nbatch` for
Vegas.
Examples
--------
### One dimensional integral
The integrand of
```math
\int_{0}^{1} \frac{\log(x)}{\sqrt{x}} \,\mathrm{d}x
```
has an algebraic-logarithmic divergence for ``x = 0``, but the integral is
convergent and its value is ``-4``. `Cuba.jl` integrator routines can
handle this class of functions and you can easily compute the numerical
approximation of this integral using one of the following commands:
```jldoctest
julia> vegas( (x,f) -> f[1] = log(x[1])/sqrt(x[1]))
Component:
1: -3.9981623937128448 ± 0.00044066437168409865 (prob.: 0.28430529712907515)
Integrand evaluations: 1007500
Number of subregions: 0
Note: The accuracy was not met within the maximum number of evaluations
julia> suave( (x,f) -> f[1] = log(x[1])/sqrt(x[1]))
Component:
1: -4.000246664970977 ± 0.00039262438882794375 (prob.: 1.0)
Integrand evaluations: 50000
Number of subregions: 50
Note: The desired accuracy was reached
julia> divonne( (x,f) -> f[1] = log(x[1])/sqrt(x[1]), atol = 1e-8, rtol = 1e-8)
Component:
1: -3.999999899620808 ± 2.1865962888458758e-7 (prob.: 0.0)
Integrand evaluations: 1002059
Number of subregions: 1582
Note: The accuracy was not met within the maximum number of evaluations
Hint: Try increasing `maxevals` to 4884287
julia> cuhre( (x,f) -> f[1] = log(x[1])/sqrt(x[1]))
Component:
1: -4.000000355067185 ± 0.00033954840286260385 (prob.: 0.0)
Integrand evaluations: 5915
Number of subregions: 46
Note: The desired accuracy was reached
```
### Vector-valued integrand
Consider the integral
```math
\int\limits_{\Omega}
\boldsymbol{f}(x,y,z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z
```
where ``\Omega = [0, 1]^{3}`` and
```math
\boldsymbol{f}(x,y,z) = \left(\sin(x)\cos(y)\exp(z), \,\exp(-(x^2 + y^2 +
z^2)), \,\frac{1}{1 - xyz}\right)
```
In this case it is more convenient to write a simple Julia script to
compute the above integral
```jldoctest
julia> using Cuba, SpecialFunctions
julia> function integrand(x, f)
f[1] = sin(x[1])*cos(x[2])*exp(x[3])
f[2] = exp(-(x[1]^2 + x[2]^2 + x[3]^2))
f[3] = 1/(1 - prod(x))
end
integrand (generic function with 1 method)
julia> result, err = cuhre(integrand, 3, 3, atol=1e-12, rtol=1e-10);
julia> answer = ((ℯ-1)*(1-cos(1))*sin(1), (sqrt(pi)*erf(1)/2)^3, zeta(3));
julia> for i = 1:3
println("Component ", i)
println(" Result of Cuba: ", result[i], " ± ", err[i])
println(" Exact result: ", answer[i])
println(" Actual error: ", abs(result[i] - answer[i]))
end
Component 1
Result of Cuba: 0.6646696797813743 ± 1.0083313461375621e-13
Exact result: 0.6646696797813771
Actual error: 2.886579864025407e-15
Component 2
Result of Cuba: 0.4165383858806458 ± 2.9328672381493003e-11
Exact result: 0.41653838588663805
Actual error: 5.992262241960589e-12
Component 3
Result of Cuba: 1.202056903164971 ± 1.195855757269273e-10
Exact result: 1.2020569031595951
Actual error: 5.375921929839933e-12
```
### Integral with non-constant boundaries
The integral
```math
\int_{0}^{\pi}\int_{0}^{z}\int_{-y}^{y}
\cos(x)\sin(y)\exp(z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z
```
has non-constant boundaries. By applying the substitution rule
repeatedly, you can scale the integrand function and get this equivalent
integral over the fixed domain ``\Omega = [0, 1]^{3}``
```math
\int\limits_{\Omega} 2\pi^{3}yz^2 \cos(\pi yz(2x - 1)) \sin(\pi yz)
\exp(\pi z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z
```
that can be computed with `Cuba.jl` using the following Julia script
```jldoctest
julia> using Cuba
julia> function integrand(x, f)
f[1] = 2pi^3*x[2]*x[3]^2*cos(pi*x[2]*x[3]*(2*x[1] - 1.0))*
sin(pi*x[2]*x[3])*exp(pi*x[3])
end
integrand (generic function with 1 method)
julia> result, err = cuhre(integrand, 3, 1, atol=1e-12, rtol=1e-10);
julia> answer = pi*ℯ^pi - (4ℯ^pi - 4)/5;
julia> begin
println("Result of Cuba: ", result[1], " ± ", err[1])
println("Exact result: ", answer)
println("Actual error: ", abs(result[1] - answer))
end
Result of Cuba: 54.98607586826152 ± 5.460606620717379e-9
Exact result: 54.98607586789537
Actual error: 3.6614977716453723e-10
```
### Integrals over Infinite Domains
`Cuba.jl` always assumes the hypercube ``[0, 1]^n`` as the integration domain,
but we have seen that, using integration by substitution, we can calculate
integrals over different domains as well. In the [Introduction](@ref) we also
proposed two useful substitutions that can be employed to change an infinite or
semi-infinite domain into a finite one.
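For reference, these are the substitutions used in the examples of this section: for a semi-infinite domain ``[0, \infty)``

```math
x = \frac{t}{1 - t}, \qquad \mathrm{d}x = \frac{\mathrm{d}t}{(1 - t)^{2}}
```

and for an infinite domain ``(-\infty, \infty)``

```math
x = \frac{2t - 1}{t(1 - t)}, \qquad
\mathrm{d}x = \frac{2t^{2} - 2t + 1}{t^{2}(1 - t)^{2}}\,\mathrm{d}t
```

with ``t \in [0, 1]`` in both cases.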
As a first example, consider the following integral with a semi-infinite
domain:
```math
\int_{0}^{\infty}\frac{\log(1 + x^2)}{1 + x^2}\,\mathrm{d}x
```
whose exact result is ``\pi\log 2``. This can be computed as follows:
```jldoctest
julia> using Cuba
julia> # The function we want to integrate over [0, ∞).
julia> func(x) = log(1 + x^2)/(1 + x^2)
func (generic function with 1 method)
julia> # Scale the function in order to integrate over [0, 1].
julia> function integrand(x, f)
f[1] = func(x[1]/(1 - x[1]))/(1 - x[1])^2
end
integrand (generic function with 1 method)
julia> result, err = cuhre(integrand, atol = 1e-12, rtol = 1e-10);
julia> answer = pi*log(2);
julia> begin
println("Result of Cuba: ", result[1], " ± ", err[1])
println("Exact result: ", answer)
println("Actual error: ", abs(result[1] - answer))
end
Result of Cuba: 2.1775860903056903 ± 2.153947023352171e-10
Exact result: 2.177586090303602
Actual error: 2.0881074647149944e-12
```
Now we want to calculate this integral over an infinite domain
```math
\int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2}\,\mathrm{d}x
```
which gives ``\pi``. You can calculate the result with the code below.
Note that the integrand function has the value ``1/2`` at ``x = 0``, but you
have to inform Julia about this.
```jldoctest
julia> using Cuba
julia> # The function we want to integrate over (-∞, ∞).
julia> func(x) = x==0 ? 0.5*one(x) : (1 - cos(x))/x^2
func (generic function with 1 method)
julia> # Scale the function in order to integrate over [0, 1].
julia> function integrand(x, f)
f[1] = func((2*x[1] - 1)/x[1]/(1 - x[1])) *
(2*x[1]^2 - 2*x[1] + 1)/x[1]^2/(1 - x[1])^2
end
integrand (generic function with 1 method)
julia> result, err = cuhre(integrand, atol = 1e-7, rtol = 1e-7);
julia> answer = float(pi);
julia> begin
println("Result of Cuba: ", result[1], " ± ", err[1])
println("Exact result: ", answer)
println("Actual error: ", abs(result[1] - answer))
end
Result of Cuba: 3.1415928900554886 ± 2.050669142055499e-6
Exact result: 3.141592653589793
Actual error: 2.3646569546897922e-7
```
### Complex integrand
As already explained, `Cuba.jl` operates on real quantities, so if you want to
integrate a complex-valued function of complex arguments you have to treat
complex quantities as 2-component arrays of real numbers. For example, if you
do not remember [Euler's
formula](https://en.wikipedia.org/wiki/Euler%27s_formula), you can compute this
simple integral
```math
\int_{0}^{\pi/2} \exp(\mathrm{i} x)\,\mathrm{d}x
```
with the following code
```jldoctest
julia> using Cuba
julia> function integrand(x, f)
# Complex integrand, scaled to integrate in [0, 1].
tmp = cis(x[1]*pi/2)*pi/2
# Assign to two components of "f" the real
# and imaginary part of the integrand.
f[1], f[2] = reim(tmp)
end
integrand (generic function with 1 method)
julia> result = cuhre(integrand, 2, 2);
julia> begin
println("Result of Cuba: ", complex(result[1]...))
println("Exact result: ", complex(1.0, 1.0))
end
Result of Cuba: 1.0000000000000002 + 1.0000000000000002im
Exact result: 1.0 + 1.0im
```
### Passing data to the integrand function
The Cuba library allows programs written in C and Fortran to pass extra data
to the integrand function with the `userdata` argument. This is useful, for
example, when the integrand function depends on changing parameters.
For example, the [cumulative distribution
function](https://en.wikipedia.org/wiki/Cumulative_distribution_function)
``F(x;k)`` of [chi-squared
distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution) is
defined by
```math
F(x; k) = \int_{0}^{x} \frac{t^{k/2 - 1}\exp(-t/2)}{2^{k/2}\Gamma(k/2)}
\,\mathrm{d}t = \frac{x}{2^{k/2}\Gamma(k/2)} \int_{0}^{1} (xt)^{k/2 - 1}\exp(-xt/2)
\,\mathrm{d}t
```
The integrand depends on user-defined parameters ``x`` and ``k``. One option is
passing a tuple `(x, k)` to the integrand using the `userdata` keyword argument
in [`vegas`](@ref), [`suave`](@ref), [`divonne`](@ref) or [`cuhre`](@ref).
The following Julia script uses this trick to compute ``F(x = \pi; k)`` for
different ``k`` and compares the result with more precise values, based on the
analytic expression of the cumulative distribution function, provided by the
[GSL.jl](https://github.com/jiahao/GSL.jl) package.
```julia
julia> using Cuba, GSL, Printf, SpecialFunctions
julia> function integrand(t, f, userdata)
# Chi-squared probability density function, without constant denominator.
# The result of integration will be divided by that factor.
# userdata is a tuple (x, k/2), see below
x, k = userdata
f[1] = (t[1]*x)^(k/2 - 1.0)*exp(-(t[1]*x)/2)
end
julia> chi2cdf(x::Real, k::Real) = x*cuhre(integrand; userdata = (x, k))[1][1]/(2^(k/2)*gamma(k/2))
chi2cdf (generic function with 1 method)
julia> x = float(pi);
julia> begin
@printf("Result of Cuba: %.6f %.6f %.6f %.6f %.6f\n",
map((k) -> chi2cdf(x, k), collect(1:5))...)
@printf("Exact result: %.6f %.6f %.6f %.6f %.6f\n",
map((k) -> cdf_chisq_P(x, k), collect(1:5))...)
end
Result of Cuba: 0.923681 0.792120 0.629694 0.465584 0.321833
Exact result: 0.923681 0.792120 0.629695 0.465584 0.321833
```
An alternative way to pass user-defined data is to implement the integrand as a
nested inner function. The inner function can access any variable visible in
its [scope](https://docs.julialang.org/en/v1/manual/variables-and-scoping/).
```julia
julia> using Cuba, GSL, Printf, SpecialFunctions
julia> function chi2cdf(x::Real, k::Real)
k2 = k/2
# Chi-squared probability density function, without constant denominator.
# The result of integration will be divided by that factor.
function chi2pdf(t::Float64)
# "k2" is taken from the outside.
return t^(k2 - 1.0)*exp(-t/2)
end
# Neither "x" is passed directly to the integrand function,
# but is visible to it. "x" is used to scale the function
# in order to actually integrate in [0, 1].
x*cuhre((t,f) -> f[1] = chi2pdf(t[1]*x))[1][1]/(2^k2*gamma(k2))
end
chi2cdf (generic function with 1 method)
julia> x = float(pi);
julia> begin
@printf("Result of Cuba: %.6f %.6f %.6f %.6f %.6f\n",
map((k) -> chi2cdf(x, k), collect(1:5))...)
@printf("Exact result: %.6f %.6f %.6f %.6f %.6f\n",
map((k) -> cdf_chisq_P(x, k), collect(1:5))...)
end
Result of Cuba: 0.923681 0.792120 0.629694 0.465584 0.321833
Exact result: 0.923681 0.792120 0.629695 0.465584 0.321833
```
### Vectorized Function
Consider the integral
```math
\int\limits_{\Omega} \prod_{i=1}^{10} \cos(x_{i})
\,\mathrm{d}\boldsymbol{x} = \sin(1)^{10} = 0.1779883\dots
```
where ``\Omega = [0, 1]^{10}`` and ``\boldsymbol{x} = (x_{1}, \dots,
x_{10})`` is a 10-dimensional vector. A simple way to compute this
integral is the following:
```julia
julia> using Cuba, BenchmarkTools
julia> cuhre((x, f) -> f[] = prod(cos.(x)), 10)
Component:
1: 0.17798706658707045 ± 1.070799596273229e-6 (prob.: 0.2438374079277991)
Integrand evaluations: 7815
Number of subregions: 2
Note: The desired accuracy was reached
julia> @benchmark cuhre((x, f) -> f[] = prod(cos.(x)), 10)
BenchmarkTools.Trial: 2448 samples with 1 evaluation.
Range (min … max): 1.714 ms … 7.401 ms ┊ GC (min … max): 0.00% … 75.31%
Time (median): 1.820 ms ┊ GC (median): 0.00%
Time (mean ± σ): 2.035 ms ± 858.367 μs ┊ GC (mean ± σ): 9.08% ± 14.32%
██▅▅▄▁ ▁ ▁
███████▇▆▆▄▁▄▁▁▁▄▃▃▅▁▁▃▃▁▃▃▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▅▆▇▇██▇ █
1.71 ms Histogram: log(frequency) by time 5.97 ms <
Memory estimate: 2.03 MiB, allocs estimate: 39078.
```
We can use vectorization in order to speed up evaluation of the
integrand function.
```julia
julia> function fun_vec(x,f)
f[1,:] .= 1.0
for j in 1:size(x,2)
for i in 1:size(x, 1)
f[1, j] *= cos(x[i, j])
end
end
end
fun_vec (generic function with 1 method)
julia> cuhre(fun_vec, 10, nvec = 1000)
Component:
1: 0.17798706658707045 ± 1.070799596273229e-6 (prob.: 0.2438374079277991)
Integrand evaluations: 7815
Number of subregions: 2
Note: The desired accuracy was reached
julia> @benchmark cuhre(fun_vec, 10, nvec = 1000)
BenchmarkTools.Trial: 4971 samples with 1 evaluation.
Range (min … max): 951.837 μs … 1.974 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 981.597 μs ┊ GC (median): 0.00%
Time (mean ± σ): 1.001 ms ± 69.101 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
█▂ ▁▆▇▁▂▂▂▆▅▂▁▁▁▁ ▁
██▇▆█████████████████████▇██▇████▇▆▇█▆▆▆▇▆▅▆▆█▇▅▅▅▅▄▅▆▄▅▄▂▃▂ █
952 μs Histogram: log(frequency) by time 1.24 ms <
Memory estimate: 1.59 KiB, allocs estimate: 39.
```
A further speed up can be gained by running the `for` loop in parallel
with `Threads.@threads`. For example, running Julia with 4 threads:
```julia
julia> function fun_par(x,f)
f[1,:] .= 1.0
Threads.@threads for j in 1:size(x,2)
for i in 1:size(x, 1)
f[1, j] *= cos(x[i, j])
end
end
end
fun_par (generic function with 1 method)
julia> cuhre(fun_par, 10, nvec = 1000)
Component:
1: 0.17798706658707045 ± 1.070799596273229e-6 (prob.: 0.2438374079277991)
Integrand evaluations: 7815
Number of subregions: 2
Note: The desired accuracy was reached
julia> @benchmark cuhre(fun_par, 10, nvec = 1000)
BenchmarkTools.Trial: 6198 samples with 1 evaluation.
Range (min … max): 587.456 μs … 7.076 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 798.933 μs ┊ GC (median): 0.00%
Time (mean ± σ): 801.498 μs ± 149.908 μs ┊ GC (mean ± σ): 0.18% ± 1.50%
▇█▇▃
▂▄▆▃▂▃▃▂▂▂▂▂▂▁▂▁▁▁▁▂▄▅▃▂▅██████▆▆▆▆▆█▇▇▆▄▄▃▂▃▂▂▂▂▂▂▁▂▁▁▁▁▁▁▁▁ ▃
587 μs Histogram: frequency by time 1.03 ms <
Memory estimate: 19.31 KiB, allocs estimate: 228.
```
Performance
-----------
`Cuba.jl` cannot ([yet?](https://github.com/giordano/Cuba.jl/issues/1))
take advantage of the parallelization capabilities of the Cuba library.
Nonetheless, its performance is comparable to that of equivalent native C or
Fortran codes based on the Cuba library when the `CUBACORES` environment
variable is set to `0` (i.e., multithreading is disabled). The following
is the result of running the benchmark present in the `test` directory on a
64-bit GNU/Linux system running Julia 0.7.0-beta2.3 (commit 83ce9c7524)
equipped with an Intel(R) Core(TM) i7-4700MQ CPU. The C and FORTRAN 77
benchmark codes have been compiled with GCC 7.3.0.
```
$ CUBACORES=0 julia -e 'using Pkg; import Cuba; include(joinpath(dirname(dirname(pathof(Cuba))), "benchmarks", "benchmark.jl"))'
[ Info: Performance of Cuba.jl:
0.257360 seconds (Vegas)
0.682703 seconds (Suave)
0.329552 seconds (Divonne)
0.233190 seconds (Cuhre)
[ Info: Performance of Cuba Library in C:
0.268249 seconds (Vegas)
0.682682 seconds (Suave)
0.319553 seconds (Divonne)
0.234099 seconds (Cuhre)
[ Info: Performance of Cuba Library in Fortran:
0.233532 seconds (Vegas)
0.669809 seconds (Suave)
0.284515 seconds (Divonne)
0.195740 seconds (Cuhre)
```
Of course, native C and Fortran codes making use of the Cuba library
outperform `Cuba.jl` when higher values of `CUBACORES` are used, for
example:
```
$ CUBACORES=1 julia -e 'using Pkg; cd(Pkg.dir("Cuba")); include("test/benchmark.jl")'
[ Info: Performance of Cuba.jl:
0.260080 seconds (Vegas)
0.677036 seconds (Suave)
0.342396 seconds (Divonne)
0.233280 seconds (Cuhre)
[ Info: Performance of Cuba Library in C:
0.096388 seconds (Vegas)
0.574647 seconds (Suave)
0.150003 seconds (Divonne)
0.102817 seconds (Cuhre)
[ Info: Performance of Cuba Library in Fortran:
0.094413 seconds (Vegas)
0.556084 seconds (Suave)
0.139606 seconds (Divonne)
0.107335 seconds (Cuhre)
```
`Cuba.jl` internally fixes `CUBACORES` to 0 in order to prevent forking of
`julia` processes that would only slow down calculations by eating up memory,
without actually taking advantage of concurrency. Furthermore, without this
measure, adding more Julia processes with `addprocs()` would only make the
program segfault.
Related projects
----------------
There are other Julia packages for multidimensional numerical integration:
* [`Cubature.jl`](https://github.com/stevengj/Cubature.jl)
* [`HCubature.jl`](https://github.com/stevengj/HCubature.jl)
* [`NIntegration.jl`](https://github.com/pabloferz/NIntegration.jl)
Development
-----------
`Cuba.jl` is developed on GitHub: <https://github.com/giordano/Cuba.jl>.
Feel free to report bugs and make suggestions at
<https://github.com/giordano/Cuba.jl/issues>.
### History
The ChangeLog of the package is available in
[NEWS.md](https://github.com/giordano/Cuba.jl/blob/master/NEWS.md) file
in top directory. There have been some breaking changes from time to
time, beware of them when upgrading the package.
License
-------
The Cuba.jl package is licensed under the GNU Lesser General Public
License, the same as [Cuba library](http://www.feynarts.de/cuba/). The
original author is Mosè Giordano.
Credits
-------
If you use this library for your work, please credit Thomas Hahn.
Citable papers about Cuba Library:
- Hahn, T. 2005, Computer Physics Communications, 168, 78.
DOI:[10.1016/j.cpc.2005.01.010](http://dx.doi.org/10.1016/j.cpc.2005.01.010).
arXiv:[hep-ph/0404043](http://arxiv.org/abs/hep-ph/0404043).
  Bibcode:[2005CoPhC.168...78H](http://adsabs.harvard.edu/abs/2005CoPhC.168...78H).
- Hahn, T. 2015, Journal of Physics Conference Series, 608, 012066.
DOI:[10.1088/1742-6596/608/1/012066](http://dx.doi.org/10.1088/1742-6596/608/1/012066).
arXiv:[1408.6373](http://arxiv.org/abs/1408.6373).
Bibcode:[2015JPhCS.608a2066H](http://adsabs.harvard.edu/abs/2015JPhCS.608a2066H).
| Cuba | https://github.com/giordano/Cuba.jl.git |
| ["MIT"] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 2453 |
using BenchmarkTools
using EnvironmentalTransport
using EarthSciMLBase, EarthSciData
using ModelingToolkit, DomainSets
using Dates
starttime = datetime2unix(DateTime(2022, 5, 1))
function setup_advection_simulator(lonres, latres, stencil)
@parameters lon=0.0 lat=0.0 lev=1.0 t
endtime = datetime2unix(DateTime(2022, 5, 1, 1, 0, 5))
geosfp, updater = GEOSFP("0.25x0.3125_NA"; dtype = Float64,
coord_defaults = Dict(:lon => 0.0, :lat => 0.0, :lev => 1.0))
domain = DomainInfo(
[partialderivatives_δxyδlonlat,
partialderivatives_δPδlev_geosfp(geosfp)],
constIC(16.0, t ∈ Interval(starttime, endtime)),
constBC(16.0,
lon ∈ Interval(deg2rad(-129), deg2rad(-61)),
lat ∈ Interval(deg2rad(11), deg2rad(59)),
lev ∈ Interval(1, 3)))
function emissions(t)
@variables c(t) = 1.0
D = Differential(t)
ODESystem([D(c) ~ lat + lon + lev], t, name = :emissions)
end
emis = emissions(t)
csys = couple(emis, domain, geosfp, updater)
op = AdvectionOperator(100.0, stencil, ZeroGradBC())
csys = couple(csys, op)
sim = Simulator(csys, [deg2rad(lonres), deg2rad(latres), 1])
scimlop = EarthSciMLBase.get_scimlop(only(csys.ops), sim)
scimlop, init_u(sim)
end
suite = BenchmarkGroup()
suite["Advection Simulator"] = BenchmarkGroup(["advection", "simulator"])
suite["Advection Simulator"]["in-place"] = BenchmarkGroup()
suite["Advection Simulator"]["out-of-place"] = BenchmarkGroup()
for stencil in [l94_stencil, ppm_stencil]
suite["Advection Simulator"]["in-place"][stencil] = BenchmarkGroup()
suite["Advection Simulator"]["out-of-place"][stencil] = BenchmarkGroup()
for (lonres, latres) in ((0.625, 0.5), (0.3125, 0.25))
@info "setting up $lonres x $latres with $stencil"
op, u = setup_advection_simulator(lonres, latres, stencil)
suite["Advection Simulator"]["in-place"][stencil]["$lonres x $latres (N=$(length(u)))"] = @benchmarkable $(op)(
$(u[:]), $(u[:]), [0.0], $starttime)
suite["Advection Simulator"]["out-of-place"][stencil]["$lonres x $latres (N=$(length(u)))"] = @benchmarkable $(op)(
$(u[:]), [0.0], $starttime)
end
end
#op, u = setup_advection_simulator(0.1, 0.1, l94_stencil)
#@profview op(u[:], u[:], [0.0], starttime)
tune!(suite)
results = run(suite, verbose = true)
BenchmarkTools.save("output.json", median(results))
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
| ["MIT"] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 981 |
using EnvironmentalTransport
using Documenter
DocMeta.setdocmeta!(EnvironmentalTransport, :DocTestSetup, :(using EnvironmentalTransport); recursive=true)
makedocs(;
modules=[EnvironmentalTransport],
    authors="EarthSciML authors and contributors",
repo="https://github.com/EarthSciML/EnvironmentalTransport.jl/blob/{commit}{path}#{line}",
sitename="EnvironmentalTransport.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://EarthSciML.github.io/EnvironmentalTransport.jl",
edit_link="main",
assets=String[],
repolink="https://github.com/EarthSciML/EnvironmentalTransport.jl"
),
pages=[
"Home" => "index.md",
"Advection" => "advection.md",
"Puff Model" => "puff.md",
"API" => "api.md",
"🔗 Benchmarks" => "benchmarks.md",
],
)
deploydocs(;
repo="github.com/EarthSciML/EnvironmentalTransport.jl",
devbranch="main",
)
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
| ["MIT"] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 1654 |
module EarthSciDataExt
using DocStringExtensions
import EarthSciMLBase
using EarthSciMLBase: param_to_var, ConnectorSystem, CoupledSystem, get_coupletype
using EarthSciData: GEOSFPCoupler
using EnvironmentalTransport: PuffCoupler, AdvectionOperator
function EarthSciMLBase.couple2(p::PuffCoupler, g::GEOSFPCoupler)
p, g = p.sys, g.sys
p = param_to_var(p, :v_lon, :v_lat, :v_lev)
g = param_to_var(g, :lon, :lat, :lev)
ConnectorSystem([
p.lon ~ g.lon
p.lat ~ g.lat
p.lev ~ g.lev
p.v_lon ~ g.A3dyn₊U
p.v_lat ~ g.A3dyn₊V
p.v_lev ~ g.A3dyn₊OMEGA
], p, g)
end
"""
$(SIGNATURES)
Couple the advection operator into the CoupledSystem.
This function mutates the operator to add the windfield variables.
There must already be a source of wind data in the coupled system for this to work.
Currently the only valid source of wind data is `EarthSciData.GEOSFP`.
"""
function EarthSciMLBase.couple(c::CoupledSystem, op::AdvectionOperator)::CoupledSystem
found = 0
for sys in c.systems
if EarthSciMLBase.get_coupletype(sys) == GEOSFPCoupler
found += 1
op.vardict = Dict(
"lon" => sys.A3dyn₊U,
"lat" => sys.A3dyn₊V,
"lev" => sys.A3dyn₊OMEGA
)
end
end
if found == 0
error("Could not find a source of wind data in the coupled system. Valid sources are currently {EarthSciData.GEOSFP}.")
elseif found > 1
error("Found multiple sources of wind data in the coupled system. Valid sources are currently {EarthSciData.GEOSFP}")
end
push!(c.ops, op)
c
end
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
| ["MIT"] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 424 |
module EnvironmentalTransport
using DocStringExtensions
using SciMLOperators
using LinearAlgebra
using SciMLBase: NullParameters
using ModelingToolkit: t, D, get_unit, getdefault, ODESystem, @variables, @parameters,
@constants, get_variables, substitute
using SciMLBase: terminate!
using EarthSciMLBase
include("advection_stencils.jl")
include("boundary_conditions.jl")
include("advection.jl")
include("puff.jl")
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
| ["MIT"] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 6380 |
export AdvectionOperator
#=
An advection kernel for a 4D array, where the first dimension is the state variables
and the next three dimensions are the spatial dimensions.
=#
function advection_kernel_4d(u, stencil, vs, Δs, Δt, idx, p = NullParameters())
lpad, rpad = stencil_size(stencil)
offsets = ((CartesianIndex(0, lpad, 0, 0), CartesianIndex(0, rpad, 0, 0)),
(CartesianIndex(0, 0, lpad, 0), CartesianIndex(0, 0, rpad, 0)),
(CartesianIndex(0, 0, 0, lpad), CartesianIndex(0, 0, 0, rpad))
)
du = zero(eltype(u))
@inbounds for i in eachindex(vs, Δs, offsets)
v, Δ, (l, r) = vs[i], Δs[i], offsets[i]
uu = @view u[(idx - l):(idx + r)]
du += stencil(uu, v, Δt, Δ; p)
end
du
end
function advection_kernel_4d_builder(stencil, v_fs, Δ_fs)
function advect_f(u, idx, Δt, t, p = NullParameters())
vs = get_vs(v_fs, idx, t)
Δs = get_Δs(Δ_fs, idx, t)
advection_kernel_4d(u, stencil, vs, Δs, Δt, idx, p)
end
end
function get_vs(v_fs, i, j, k, t)
(
(v_fs[1](i, j, k, t), v_fs[1](i + 1, j, k, t)),
(v_fs[2](i, j, k, t), v_fs[2](i, j + 1, k, t)),
(v_fs[3](i, j, k, t), v_fs[3](i, j, k + 1, t))
)
end
get_vs(v_fs, idx::CartesianIndex{4}, t) = get_vs(v_fs, idx[2], idx[3], idx[4], t)
get_Δs(Δ_fs, i, j, k, t) = (Δ_fs[1](i, j, k, t), Δ_fs[2](i, j, k, t), Δ_fs[3](i, j, k, t))
get_Δs(Δ_fs, idx::CartesianIndex{4}, t) = get_Δs(Δ_fs, idx[2], idx[3], idx[4], t)
#=
A function to create an advection operator for a 4D array,
Arguments:
* `u_prototype`: A prototype array of the same size and type as the input array.
* `stencil`: The stencil operator, e.g. `l94_stencil` or `ppm_stencil`.
* `v_fs`: A vector of functions to get the wind velocity at a given place and time.
The function signature should be `v_fs(i, j, k, t)`.
* `Δ_fs`: A vector of functions to get the grid spacing at a given place and time.
The function signature should be `Δ_fs(i, j, k, t)`.
* `Δt`: The time step size, which is assumed to be fixed.
* `bc_type`: The boundary condition type, e.g. `ZeroGradBC()`.
=#
function advection_op(u_prototype, stencil, v_fs, Δ_fs, Δt, bc_type;
p = NullParameters())
sz = size(u_prototype)
v_fs = tuple(v_fs...)
Δ_fs = tuple(Δ_fs...)
adv_kernel = advection_kernel_4d_builder(stencil, v_fs, Δ_fs)
function advection(u, p, t) # Out-of-place
u = bc_type(reshape(u, sz...))
du = adv_kernel.((u,), CartesianIndices(u), (Δt,), (t,), (p,))
reshape(du, :)
end
function advection(du, u, p, t) # In-place
u = bc_type(reshape(u, sz...))
du = reshape(du, sz...)
du .= adv_kernel.((u,), CartesianIndices(u), (Δt,), (t,), (p,))
end
FunctionOperator(advection, reshape(u_prototype, :), p = p)
end
"Get a value from the x-direction velocity field."
function vf_x(args1, args2)
i, j, k, t = args1
data_f, grid1, grid2, grid3, Δ = args2
x1 = grid1[min(i, length(grid1))] - Δ / 2 # Staggered grid
x2 = grid2[j]
x3 = grid3[k]
data_f(t, x1, x2, x3)
end
"Get a value from the y-direction velocity field."
function vf_y(args1, args2)
i, j, k, t = args1
data_f, grid1, grid2, grid3, Δ = args2
x1 = grid1[i]
x2 = grid2[min(j, length(grid2))] - Δ / 2 # Staggered grid
x3 = grid3[k]
data_f(t, x1, x2, x3)
end
"Get a value from the z-direction velocity field."
function vf_z(args1, args2)
i, j, k, t = args1
data_f, grid1, grid2, grid3, Δ = args2
x1 = grid1[i]
x2 = grid2[j]
x3 = k > 1 ? grid3[min(k, length(grid3))] - Δ / 2 : grid3[k]
data_f(t, x1, x2, x3) # Staggered grid
end
tuplefunc(vf) = (i, j, k, t) -> vf((i, j, k, t))
"""
$(SIGNATURES)
Return a function that gets the wind velocity at a given place and time for the given `varname`.
`data_f` should be a function that takes a time and three spatial coordinates and returns the value of
the wind speed in the direction indicated by `varname`.
"""
function get_vf(sim, varname::AbstractString, data_f)
if varname ∈ ("lon", "x")
vf = Base.Fix2(
vf_x, (data_f, sim.grid[1], sim.grid[2], sim.grid[3], sim.Δs[1]))
return tuplefunc(vf)
elseif varname ∈ ("lat", "y")
vf = Base.Fix2(
vf_y, (data_f, sim.grid[1], sim.grid[2], sim.grid[3], sim.Δs[2]))
return tuplefunc(vf)
elseif varname == "lev"
vf = Base.Fix2(
vf_z, (data_f, sim.grid[1], sim.grid[2], sim.grid[3], sim.Δs[3]))
return tuplefunc(vf)
else
error("Invalid variable name $(varname).")
end
end
"function to get grid deltas."
function Δf(args1, args2)
i, j, k, t = args1
tff, Δ, grid1, grid2, grid3 = args2
c1, c2, c3 = grid1[i], grid2[j], grid3[k]
Δ / tff(t, c1, c2, c3)
end
"""
$(SIGNATURES)
Return a function that gets the grid spacing at a given place and time for the given `varname`.
"""
function get_Δ(sim::EarthSciMLBase.Simulator, varname::AbstractString)
pvaridx = findfirst(
isequal(varname), String.(Symbol.(EarthSciMLBase.pvars(sim.domaininfo))))
tff = sim.tf_fs[pvaridx]
tuplefunc(Base.Fix2(
Δf, (tff, sim.Δs[pvaridx], sim.grid[1], sim.grid[2], sim.grid[3])))
end
"""
$(SIGNATURES)
Create an `EarthSciMLBase.Operator` that performs advection.
Advection is performed using the given `stencil` operator
(e.g. `l94_stencil` or `ppm_stencil`).
`p` is an optional parameter set to be used by the stencil operator.
`bc_type` is the boundary condition type, e.g. `ZeroGradBC()`.
"""
mutable struct AdvectionOperator <: EarthSciMLBase.Operator
Δt::Any
stencil::Any
bc_type::Any
vardict::Any
function AdvectionOperator(Δt, stencil, bc_type)
new(Δt, stencil, bc_type, nothing)
end
end
function EarthSciMLBase.get_scimlop(op::AdvectionOperator, sim::Simulator, u = nothing)
u = isnothing(u) ? init_u(sim) : u
pvars = EarthSciMLBase.pvars(sim.domaininfo)
pvarstrs = [String(Symbol(pv)) for pv in pvars]
v_fs = []
Δ_fs = []
for varname in pvarstrs
data_f = sim.obs_fs[sim.obs_fs_idx[op.vardict[varname]]]
push!(v_fs, get_vf(sim, varname, data_f))
push!(Δ_fs, get_Δ(sim, varname))
end
scimlop = advection_op(u, op.stencil, v_fs, Δ_fs, op.Δt, op.bc_type, p = sim.p)
cache_operator(scimlop, u[:])
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
| ["MIT"] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 6549 |
export l94_stencil, ppm_stencil, upwind1_stencil, upwind2_stencil
"""
$(SIGNATURES)
L94 advection in 1-D (Lin et al., 1994)
* ϕ is the scalar field at the current time step, it should be a vector of length 5.
* U is the velocity at both edges of the central grid cell, it should be a vector of length 2.
* Δt is the length of the time step.
* Δz is the grid spacing.
The output will be the time derivative of the central index (i.e. index 3)
of the ϕ vector (i.e. dϕ/dt).
(The output is dependent on the Courant number, which depends on Δt, so Δt needs to be
an input to the function.)
"""
function l94_stencil(ϕ, U, Δt, Δz; kwargs...)
δϕ1(i) = ϕ[i] - ϕ[i - 1]
Δϕ1_avg(i) = (δϕ1(i) + δϕ1(i + 1)) / 2.0
## Monotonicity slope limiter
ϕ1_min(i) = minimum((ϕ[i - 1], ϕ[i], ϕ[i + 1]))
ϕ1_max(i) = maximum((ϕ[i - 1], ϕ[i], ϕ[i + 1]))
function Δϕ1_mono(i)
sign(Δϕ1_avg(i)) *
minimum((abs(Δϕ1_avg(i)), 2 * (ϕ[i] - ϕ1_min(i)),
2 * (ϕ1_max(i) - ϕ[i])))
end
courant(i) = U[i] * Δt / Δz
function FLUX(i)
ifelse(U[i] >= 0,
(U[i] * (ϕ[i + 1] + Δϕ1_mono(i + 1) * (1 - courant(i)) / 2.0)),
(U[i] * (ϕ[i + 2] - Δϕ1_mono(i + 2) * (1 + courant(i)) / 2.0)))
end
ϕ2(i) = -(FLUX(i - 1) - FLUX(i - 2)) / Δz
ϕ2(3)
end
" Return the left and right stencil size of the L94 stencil. "
stencil_size(s::typeof(l94_stencil)) = (2, 2)
"""
$(SIGNATURES)
PPM advection in 1-D (Colella and Woodward, 1984)
* ϕ is the scalar field at the current time step; it should be a vector of length 8 (3 cells on the left, the central cell, and 4 cells on the right).
* U is the velocity at both edges of the central grid cell; it should be a vector of length 2.
* Δt is the length of the time step.
* Δz is the grid spacing.
The output will be the time derivative of the central index (i.e. index 4)
of the ϕ vector (i.e. dϕ/dt).
(The output is dependent on the Courant number, which depends on Δt, so Δt needs to be
an input to the function.)
"""
function ppm_stencil(ϕ, U, Δt, Δz; kwargs...)
ϵ = 0.01
η⁽¹⁾ = 20
η⁽²⁾ = 0.05
## Edge value calculation
δϕ(i) = 1 / 2 * (ϕ[i + 1] - ϕ[i - 1])
function δₘϕ(i)
ifelse((ϕ[i + 1] - ϕ[i]) * (ϕ[i] - ϕ[i - 1]) > 0,
min(
abs(δϕ(i - 1)), 2 * abs(ϕ[i] - ϕ[i - 1]), 2 * abs(ϕ[i + 1] - ϕ[i])) *
sign(δϕ(i - 1)),
zero(eltype(ϕ))
)
end
ϕ₊½(i) = 1 / 2 * (ϕ[i + 1] + ϕ[i]) - 1 / 6 * (δₘϕ(i + 1) - δₘϕ(i))
## Discontinuity detection
δ²ϕ(i) = 1 / (6 * Δz^2) * (ϕ[i + 1] - 2 * ϕ[i] + ϕ[i - 1])
function η_tilde(i)
ifelse(
-δ²ϕ(i + 1) * δ²ϕ(i - 1) * abs(ϕ[i + 1] - ϕ[i - 1]) -
ϵ * min(abs(ϕ[i + 1]), abs(ϕ[i - 1])) > 0,
-(δ²ϕ(i + 1) - δ²ϕ(i - 1)) * (Δz^2) / (ϕ[i + 1] - ϕ[i - 1]),
zero(eltype(ϕ))
)
end
η(i) = clamp(η⁽¹⁾ * (η_tilde(i) - η⁽²⁾), 0, 1)
ϕLᵈ(i) = ϕ[i] + 1 / 2 * δₘϕ(i)
ϕRᵈ(i) = ϕ[i + 1] + 1 / 2 * δₘϕ(i + 1)
ϕL₀(i) = ϕ₊½(i) * (1 - η(i)) + ϕLᵈ(i) * η(i)
ϕR₀(i) = ϕ₊½(i + 1) * (1 - η(i)) + ϕRᵈ(i) * η(i)
## Monotonicity examination
function ϕL(i)
ifelse((ϕR₀(i) - ϕ[i]) * (ϕ[i] - ϕL₀(i)) <= 0,
ϕ[i],
ifelse(
(ϕR₀(i) - ϕL₀(i)) * (ϕ[i] - 1 / 2 * (ϕL₀(i) + ϕR₀(i))) >=
(ϕR₀(i) - ϕL₀(i))^2 / 6,
3 * ϕ[i] - 2 * ϕR₀(i),
ϕL₀(i)
))
end
function ϕR(i)
ifelse((ϕR₀(i) - ϕ[i]) * (ϕ[i] - ϕL₀(i)) <= 0,
ϕ[i],
ifelse(
-(ϕR₀(i) - ϕL₀(i))^2 / 6 >
(ϕR₀(i) - ϕL₀(i)) * (ϕ[i] - 1 / 2 * (ϕR₀(i) + ϕL₀(i))),
3 * ϕ[i] - 2 * ϕL₀(i),
ϕR₀(i)
))
end
## Compute flux
courant(i) = U[i] * Δt / Δz
Δϕ(i) = ϕR(i) - ϕL(i)
ϕ₆(i) = 6 * (ϕ[i] - 1 / 2 * (ϕL(i) + ϕR(i)))
function FLUX(i)
ifelse(U[i] >= 0,
courant(i) * (ϕR(i + 2) -
1 / 2 * courant(i) *
(Δϕ(i + 2) - (1 - 2 / 3 * courant(i)) * ϕ₆(i + 2))),
courant(i) * (ϕL(i + 3) -
1 / 2 * courant(i) *
(Δϕ(i + 3) - (1 + 2 / 3 * courant(i)) * ϕ₆(i + 3)))
)
end
ϕ2(i) = (FLUX(i - 3) - FLUX(i - 2)) / Δt
ϕ2(4)
end
" Return the left and right stencil size of the PPM stencil. "
stencil_size(s::typeof(ppm_stencil)) = (3, 4)
"""
$(SIGNATURES)
First-order upwind advection in 1-D: https://en.wikipedia.org/wiki/Upwind_scheme.
* ϕ is the scalar field at the current time step; it should be a vector of length 3 (1 cell on the left, the central cell, and 1 cell on the right).
* U is the velocity at both edges of the central grid cell; it should be a vector of length 2.
* Δt is the length of the time step.
* Δz is the grid spacing.
The output will be the time derivative of the central index (i.e. index 2)
of the ϕ vector (i.e. dϕ/dt).
`Δt` and `p` are not used, but are function arguments for consistency with other operators.
"""
function upwind1_stencil(ϕ, U, Δt, Δz; p = nothing)
sz = sign(Δz) # Handle negative grid spacing
ul₊ = sz*max(sz*U[1], zero(eltype(U)))
ul₋ = sz*min(sz*U[1], zero(eltype(U)))
ur₊ = sz*max(sz*U[2], zero(eltype(U)))
ur₋ = sz*min(sz*U[2], zero(eltype(U)))
flux₊ = (ϕ[1]*ul₊ - ϕ[2]*ur₊) / Δz
flux₋ = (ϕ[2]*ul₋ - ϕ[3]*ur₋) / Δz
flux₊ + flux₋
end
" Return the left and right stencil size of the first-order upwind stencil. "
stencil_size(s::typeof(upwind1_stencil)) = (1, 1)
"""
$(SIGNATURES)
Second-order upwind advection in 1-D, otherwise known as linear-upwind differencing (LUD): https://en.wikipedia.org/wiki/Upwind_scheme.
* ϕ is the scalar field at the current time step; it should be a vector of length 5 (2 cells on the left, the central cell, and 2 cells on the right).
* U is the velocity at both edges of the central grid cell; it should be a vector of length 2.
* Δt is the length of the time step.
* Δz is the grid spacing.
The output will be the time derivative of the central index (i.e. index 3)
of the ϕ vector (i.e. dϕ/dt).
(Δt is not used, but is a function argument for consistency with other operators.)
"""
function upwind2_stencil(ϕ, U, Δt, Δz; kwargs...)
u₊ = max(U[1], zero(eltype(U)))
u₋ = min(U[2], zero(eltype(U)))
ϕ₋ = (3ϕ[3] - 4ϕ[2] + ϕ[1]) / (2Δz)
ϕ₊ = (-ϕ[4] + 4ϕ[3] - 3ϕ[2]) / (2Δz)
-(u₊ * ϕ₋ + u₋ * ϕ₊)
end
" Return the left and right stencil size of the second-order upwind stencil. "
stencil_size(s::typeof(upwind2_stencil)) = (2, 2)
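Each stencil is applied in a sliding window whose left/right extent is given by `stencil_size`, with `U` holding the velocities at the two edges of the central cell. The following is a minimal sketch of that pattern (the field and wind values are arbitrary; this mirrors how the package's own tests call the stencils):

```julia
using EnvironmentalTransport

ϕ = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # arbitrary 1-D scalar field
U = fill(1.0, length(ϕ) + 1)        # edge velocities (constant unit wind)
Δt, Δz = 0.1, 1.0

# `stencil_size` is not exported, so qualify it with the module name.
lpad, rpad = EnvironmentalTransport.stencil_size(upwind1_stencil)
dϕdt = [upwind1_stencil(ϕ[(i - lpad):(i + rpad)], U[i:(i + 1)], Δt, Δz)
        for i in (1 + lpad):(length(ϕ) - rpad)]
# For this linear field with unit wind, dϕ/dt = -u ∂ϕ/∂x = -1 at every interior cell:
# dϕdt == [-1.0, -1.0, -1.0, -1.0]
```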
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 996 | export ZeroGradBC
"An array with external indexing implemented for boundary conditions."
abstract type BCArray{T, N} <: AbstractArray{T, N} end
"""
$(SIGNATURES)
An array with zero gradient boundary conditions.
"""
struct ZeroGradBCArray{P, T, N} <: BCArray{T, N}
parent::P
function ZeroGradBCArray(x::AbstractArray{T, N}) where {T, N}
return new{typeof(x), T, N}(x)
end
end
zerogradbcindex(i::Int, N::Int) = clamp(i, 1, N)
zerogradbcindex(i::UnitRange, N::Int) = zerogradbcindex.(i, N)
Base.size(A::ZeroGradBCArray) = size(A.parent)
Base.checkbounds(::Type{Bool}, ::ZeroGradBCArray, i...) = true
function Base.getindex(A::ZeroGradBCArray{P, T, N},
ind::Vararg{Union{Int, UnitRange}, N}) where {P, T, N}
v = A.parent
i = map(zerogradbcindex, ind, size(A))
@boundscheck checkbounds(v, i...)
@inbounds ret = v[i...]
ret
end
"""
$(SIGNATURES)
Zero gradient boundary conditions.
"""
struct ZeroGradBC end
(bc::ZeroGradBC)(x) = ZeroGradBCArray(x)
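A short sketch of how the wrapper behaves: out-of-range indices clamp to the nearest in-range index, so reads just past the boundary repeat the edge value, which is what produces a zero gradient there (the array values below are arbitrary):

```julia
using EnvironmentalTransport

a = reshape(collect(1.0:12.0), 3, 4)  # column-major 3×4 array
x = ZeroGradBC()(a)

x[2, 3]  # in range: same as a[2, 3], i.e. 8.0
x[0, 1]  # row index clamped to 1: returns a[1, 1], i.e. 1.0
x[9, 4]  # row index clamped to 3: returns a[3, 4], i.e. 12.0
```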
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 3359 | export Puff
struct PuffCoupler
sys::Any
end
"""
$(TYPEDSIGNATURES)
Create a Lagrangian transport model which advects a "puff" or particle of matter
within a fluid velocity field.
Model boundaries are set by the DomainInfo argument.
The model sets boundaries at the model bottom (the ground) and top,
preventing the puff from crossing those boundaries. If the
puff reaches one of the horizontal boundaries, the simulation is stopped.
"""
function Puff(di::DomainInfo; name = :puff)
pv = EarthSciMLBase.pvars(di)
coords = []
for p in pv
n = Symbol(p)
v = EarthSciMLBase.add_metadata(only(@variables $n(t) = getdefault(p)), p)
push!(coords, v)
end
@assert length(coords)==3 "DomainInfo must have 3 coordinates for puff model but currently has $(length(coords)): $coords"
# Get transforms for e.g. longitude to meters.
trans = EarthSciMLBase.partialderivative_transforms(di)
for (it, tr) in enumerate(trans) # Make sure using correct coords.
for (ip, p) in enumerate(pv)
vars = get_variables(tr)
iloc = findfirst(isequal(p), vars)
if !isnothing(iloc)
trans[it] = substitute(trans[it], vars[iloc] => coords[ip])
end
end
end
# Create placeholder velocity variables.
vs = []
for i in eachindex(coords)
v_sym = Symbol("v_$(Symbol(pv[i]))")
vu = get_unit(coords[i]) / get_unit(trans[i]) / get_unit(t)
v = only(@parameters $(v_sym)=0 [unit = vu description = "$(Symbol(pv[i])) speed"])
push!(vs, v)
end
eqs = D.(coords) .~ vs .* trans
grd = EarthSciMLBase.grid(di, [1, 1, 1])
lev_idx = only(findall(v -> string(Symbol(v)) in ["lev(t)", "z(t)"], coords))
lon_idx = only(findall(v -> string(Symbol(v)) in ["lon(t)", "x(t)"], coords))
lat_idx = only(findall(v -> string(Symbol(v)) in ["lat(t)", "y(t)"], coords))
# Boundary condition at the ground and model top.
uc = get_unit(coords[lev_idx])
@constants(
offset=0.05, [unit = uc, description="Offset for boundary conditions"],
glo=grd[lev_idx][begin], [unit=uc, description="lower bound"],
ghi=grd[lev_idx][end], [unit=uc, description="upper bound"],
v_zero=0, [unit = get_unit(eqs[lev_idx].rhs)],
)
@variables v_vertical(t) [unit = get_unit(eqs[lev_idx].rhs)]
push!(eqs, v_vertical ~ eqs[lev_idx].rhs)
eqs[lev_idx] = let
eq = eqs[lev_idx]
c = coords[lev_idx]
eq.lhs ~ ifelse(c - offset < glo, max(v_zero, v_vertical),
ifelse(c + offset > ghi, min(v_zero, v_vertical), v_vertical))
end
lower_bound = coords[lev_idx] ~ grd[lev_idx][begin]
upper_bound = coords[lev_idx] ~ grd[lev_idx][end]
vertical_boundary = [lower_bound, upper_bound]
# Stop simulation if we reach the lateral boundaries.
affect!(integrator, u, p, ctx) = terminate!(integrator)
wb = coords[lon_idx] ~ grd[lon_idx][begin]
eb = coords[lon_idx] ~ grd[lon_idx][end]
sb = coords[lat_idx] ~ grd[lat_idx][begin]
nb = coords[lat_idx] ~ grd[lat_idx][end]
lateral_boundary = [wb, eb, sb, nb] => (affect!, [], [], [], nothing)
ODESystem(eqs, EarthSciMLBase.ivar(di); name = name,
metadata = Dict(:coupletype => PuffCoupler),
continuous_events = [vertical_boundary, lateral_boundary])
end
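A minimal sketch of constructing a `Puff` over a simple Cartesian unit cube (this mirrors the pattern in the package's tests; the domain bounds and initial/boundary values are arbitrary):

```julia
using EnvironmentalTransport, EarthSciMLBase, ModelingToolkit, DomainSets, DynamicQuantities
using ModelingToolkit: t

@parameters x=0 [unit = u"m"]
@parameters y=0 [unit = u"m"]
@parameters z=0 [unit = u"m"]

di = DomainInfo(
    constIC(16.0, t ∈ Interval(0.0, 1.0)),
    constBC(16.0,
        x ∈ Interval(-1.0, 1.0),
        y ∈ Interval(-1.0, 1.0),
        z ∈ Interval(-1.0, 1.0)),
    dtype = Float64)

puff = structural_simplify(Puff(di))
# The simplified system has one ODE per coordinate, driven by the
# velocity parameters v_x, v_y, and v_z.
```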
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 2672 | using EnvironmentalTransport
using EnvironmentalTransport: get_vf, get_Δ
using Test
using EarthSciMLBase, EarthSciData
using ModelingToolkit, DomainSets, OrdinaryDiffEq
using ModelingToolkit: t, D
using Distributions, LinearAlgebra
using DynamicQuantities
using Dates
@parameters(
lon=0.0, [unit=u"rad"],
lat=0.0, [unit=u"rad"],
lev=1.0,
)
starttime = datetime2unix(DateTime(2022, 5, 1))
endtime = datetime2unix(DateTime(2022, 5, 1, 1, 0, 5))
geosfp, geosfp_updater = GEOSFP("4x5"; dtype = Float64,
coord_defaults = Dict(:lon => 0.0, :lat => 0.0, :lev => 1.0))
domain = DomainInfo(
[partialderivatives_δxyδlonlat,
partialderivatives_δPδlev_geosfp(geosfp)],
constIC(16.0, t ∈ Interval(starttime, endtime)),
constBC(16.0,
lon ∈ Interval(deg2rad(-130.0), deg2rad(-60.0)),
lat ∈ Interval(deg2rad(9.75), deg2rad(60.0)),
lev ∈ Interval(1, 3)))
function emissions(μ_lon, μ_lat, σ)
@variables c(t) = 0.0 [unit=u"kg"]
@constants v_emis = 50.0 [unit=u"kg/s"]
@constants t_unit = 1.0 [unit=u"s"] # Needed so that arguments to `pdf` are unitless.
dist = MvNormal([starttime, μ_lon, μ_lat, 1], Diagonal(map(abs2, [3600.0, σ, σ, 1])))
ODESystem([D(c) ~ pdf(dist, [t/t_unit, lon, lat, lev]) * v_emis],
t, name = :Test₊emissions)
end
emis = emissions(deg2rad(-122.6), deg2rad(45.5), 0.1)
csys = couple(emis, domain, geosfp, geosfp_updater)
sim = Simulator(csys, [deg2rad(4), deg2rad(4), 1])
st = SimulatorStrangThreads(Tsit5(), SSPRK22(), 1.0)
sol = run!(sim, st)
@test 310 < norm(sol.u[end]) < 330
op = AdvectionOperator(100.0, l94_stencil, ZeroGradBC())
@test isnothing(op.vardict) # Before coupling, there shouldn't be anything here.
csys = couple(csys, op)
@test !isnothing(op.vardict) # after coupling, there should be something here.
sol = run!(sim, st)
# With advection, the norm should be lower because the pollution is more spread out.
@test 310 < norm(sol.u[end]) < 350
@testset "get_vf lon" begin
f = sim.obs_fs[sim.obs_fs_idx[op.vardict["lon"]]]
@test get_vf(sim, "lon", f)(2, 3, 1, starttime) ≈ -6.816295428727573
end
@testset "get_vf lat" begin
f = sim.obs_fs[sim.obs_fs_idx[op.vardict["lat"]]]
@test get_vf(sim, "lat", f)(3, 2, 1, starttime) ≈ -5.443038969820774
end
@testset "get_vf lev" begin
f = sim.obs_fs[sim.obs_fs_idx[op.vardict["lev"]]]
@test get_vf(sim, "lev", f)(3, 1, 2, starttime) ≈ -0.019995461793337128
end
@testset "get_Δ" begin
@test get_Δ(sim, "lat")(2, 3, 1, starttime) ≈ 445280.0
@test get_Δ(sim, "lon")(3, 2, 1, starttime) ≈ 432517.0383085161
@test get_Δ(sim, "lev")(3, 1, 2, starttime) ≈ -1516.7789198950632
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 5518 | using Main.EnvironmentalTransport
using Test
c = [0.0, 1, 2, 3, 4, 5]
v = [10.0, 8, 6, 4, 2, 0, 1]
Δt = 0.05
Δz = 0.5
@testset "l94 1" begin
c2 = [c[1], c[1], c..., c[end], c[end]]
result = [l94_stencil(c2[(i - 2):(i + 2)], v[(i - 2):(i - 1)], Δt, Δz) for i in 3:8]
@test max.((0,), c .+ result .* Δt) ≈ [0.0, 0.28, 1.8, 3.24, 4.68, 4.5]
end
@testset "ppm 1" begin
c2 = [c[1], c[1], c[1], c..., c[end], c[end], c[end], c[end]]
result = [ppm_stencil(c2[(i - 3):(i + 4)], v[(i - 3):(i - 2)], Δt, Δz) for i in 4:9]
@test c .+ result .* Δt ≈ [0.0, 0.3999999999999999, 1.8, 3.2, 4.6, 4.5]
end
@testset "upwind1 1" begin
c2 = [c[1], c..., c[end]]
result = [upwind1_stencil(c2[(i - 1):(i + 1)], v[(i - 1):i], Δt, Δz) for i in 2:7]
@test c .+ result .* Δt ≈ [0.0, 0.3999999999999999, 1.8, 3.2, 4.6, 4.5]
end
@testset "upwind2 1" begin
c2 = [c[1], c[1], c..., c[end], c[end]]
result = [upwind2_stencil(c2[(i - 2):(i + 2)], v[(i - 2):(i - 1)], Δt, Δz) for i in 3:8]
@test max.((0,), c .+ result .* Δt) ≈ [0.0, 0.0, 1.4, 2.6, 3.8, 5.0]
end
c = [6.0, 6, 5, 5, 6, 6]
v = [2.0, 2, 2, 2, 2, 2, 2]
@testset "l94 2" begin
c2 = [c[1], c[1], c..., c[end], c[end]]
result = [l94_stencil(c2[(i - 2):(i + 2)], v[(i - 2):(i - 1)], Δt, Δz) for i in 3:8]
@test max.((0,), c .+ result .* Δt) ≈ [6.0, 6.0, 5.2, 5.0, 5.8, 6.0]
end
@testset "ppm 2" begin
c2 = [c[1], c[1], c[1], c..., c[end], c[end], c[end], c[end]]
result = [ppm_stencil(c2[(i - 3):(i + 4)], v[(i - 3):(i - 2)], Δt, Δz) for i in 4:9]
@test c .+ result .* Δt ≈ [6.0, 6.0, 5.2, 5.0, 5.8, 6.0]
end
@testset "upwind1 2" begin
c2 = [c[1], c..., c[end]]
result = [upwind1_stencil(c2[(i - 1):(i + 1)], v[(i - 1):i], Δt, Δz) for i in 2:7]
@test c .+ result .* Δt ≈ [6.0, 6.0, 5.2, 5.0, 5.8, 6.0]
end
@testset "upwind2 2" begin
c2 = [c[1], c[1], c..., c[end], c[end]]
result = [upwind2_stencil(c2[(i - 2):(i + 2)], v[(i - 2):(i - 1)], Δt, Δz) for i in 3:8]
@test max.((0,), c .+ result .* Δt) ≈ [6.0, 6.0, 5.3, 4.9, 5.7, 6.1]
end
@testset "Constant Field Preservation" begin
u0 = ones(10)
v = 1.0
Δt = 0.1
Δz = 0.1
@testset "Constant wind" begin
for stencil in [upwind1_stencil, upwind2_stencil, l94_stencil, ppm_stencil]
@testset "$(nameof(stencil))" begin
lpad, rpad = EnvironmentalTransport.stencil_size(stencil)
dudt = [stencil(u0[(i - lpad):(i + rpad)], [v, v], Δt, Δz)
for i in (1 + lpad):(10 - rpad)]
@test dudt ≈ zeros(10 - lpad - rpad)
end
end
end
end
@testset "Known solution" begin
for (dir, u0) in [("up", collect(1.0:10.0)), ("down", collect(10.0:-1:1))]
@testset "$dir" begin
v = 1.0
Δt = 1.0
            # For a vertical grid, increasing altitude means decreasing pressure, so Δz is negative.
for (Δz, zdir) in [(1.0, "pos Δz"), (-1.0, "neg Δz")]
@testset "$zdir" begin
uu0 = u0 .* sign(Δz)
for stencil in [
upwind1_stencil, upwind2_stencil, l94_stencil, ppm_stencil]
@testset "$(nameof(stencil))" begin
lpad, rpad = EnvironmentalTransport.stencil_size(stencil)
dudt = [stencil(uu0[(i - lpad):(i + rpad)], [v, v], Δt, Δz)
for i in (1 + lpad):(10 - rpad)]
if dir == "up"
@test dudt ≈ zeros(10 - lpad - rpad) .- 1
else
@test dudt ≈ zeros(10 - lpad - rpad) .+ 1
end
end
end
end
end
end
end
end
@testset "Mass Conservation" begin
u0_opts = [("up", 1.0:10.0), ("down", 10.0:-1:1), ("rand", rand(10))]
for stencil in [upwind1_stencil, upwind2_stencil, l94_stencil, ppm_stencil]
@testset "$(nameof(stencil))" begin
lpad, rpad = EnvironmentalTransport.stencil_size(stencil)
N = 10 + lpad * 2 + rpad * 2
v_opts = [("c", ones(N + 1)), ("up", 1.0:(N + 1)),
("down", (N + 1):-1:1.0), ("neg up", -(1.0:(N + 1))),
("neg down", -((N + 1):-1:1.0)), ("rand", rand(N + 1) .* 2 .- 1)]
Δz_opts = [("c", ones(N)), ("up", 1.0:N), ("down", N:-1:1.0), ("neg", -N:-1.0)]
for (d1, u0_in) in u0_opts
@testset "u0 $d1" begin
u0 = zeros(N)
u0[(1 + lpad * 2):(N - rpad * 2)] .= u0_in
for (d2, v) in v_opts
@testset "v $d2" begin
Δt = 1.0
for (d3, Δz) in Δz_opts
@testset "Δz $d3" begin
dudt = [stencil(
u0[(i - lpad):(i + rpad)],
v[i:(i + 1)], Δt,
Δz[i])
for i in (1 + lpad):(N - rpad)]
@test sum(dudt .* Δz[(1 + lpad):(N - rpad)])≈0.0 atol=1e-14
end
end
end
end
end
end
end
end
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 1754 | using EnvironmentalTransport: advection_op
using EnvironmentalTransport
using Test
using LinearAlgebra
using SciMLOperators
using SciMLBase: NullParameters
c = zeros(3, 6, 6, 6)
c[2, :, 3, 4] = [0.0, 1, 2, 3, 4, 5]
c[2, 3, :, 4] = [0.0, 1, 2, 3, 4, 5]
c[2, 3, 4, :] = [0.0, 1, 2, 3, 4, 5]
const v = [10.0, 8, 6, 4, 2, 0, 1]
const Δt = 0.05
const Δz = 0.5
v_fs = ((i, j, k, t) -> v[i], (i, j, k, t) -> v[j], (i, j, k, t) -> v[k])
Δ_fs = ((i, j, k, t) -> Δz, (i, j, k, t) -> Δz, (i, j, k, t) -> Δz)
@testset "4d advection op" begin
adv_op = advection_op(c, upwind1_stencil, v_fs, Δ_fs, Δt, ZeroGradBC())
adv_op = cache_operator(adv_op, c)
result_oop = adv_op(c[:], NullParameters(), 0.0)
result_iip = similar(result_oop)
adv_op(result_iip, c[:], NullParameters(), 0.0)
for (s, result) in (("in-place", result_iip), ("out-of-place", result_oop))
@testset "$s" begin
result = reshape(result, size(c))
@test result[2, :, 3, 4] ≈ [0.0, -24.0, -16.0, -32.0, -36.0, -70.0]
@test result[2, 3, :, 4] ≈ [0.0, -24.0, -16.0, -16.0, -36.0, -70.0]
@test result[2, 3, 4, :] ≈ [0.0, -24.0, -28.0, -16.0, -36.0, -70.0]
@test all(result[1, :, :, :] .≈ 0.0)
@test all(result[3, :, :, :] .≈ 0.0)
end
end
end
mul_stencil(ϕ, U, Δt, Δz; p = 0.0) = p
EnvironmentalTransport.stencil_size(s::typeof(mul_stencil)) = (0, 0)
@testset "parameters" begin
adv_op = advection_op(c, mul_stencil, v_fs, Δ_fs, Δt, ZeroGradBC(), p = 0.0)
adv_op = cache_operator(adv_op, c)
result_oop = adv_op(c[:], 2.0, 0.0)
result_iip = similar(result_oop)
adv_op(result_iip, c[:], 2.0, 0.0)
@test all(result_iip .== 6.0)
@test all(result_oop .== 6.0)
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 288 | using Main.EnvironmentalTransport
using Test
a = rand(3, 4)
x = ZeroGradBC()(a)
@test x[1:3,1:4] == a
@test all(x[-10:1,15:30] .== a[begin,end])
@test all(x[end:end+20,begin-3:begin] .== a[end,begin])
@test all((@view x[1:3,1:4]) .== a)
@test CartesianIndices((3, 4)) == eachindex(x)
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 1386 | using Test
using Main.EnvironmentalTransport
using EarthSciMLBase
using EarthSciData
using ModelingToolkit
using ModelingToolkit: t
using DynamicQuantities
using DomainSets
using OrdinaryDiffEq
using Dates
starttime = DateTime(2022, 5, 1)
endtime = DateTime(2022, 5, 1, 3)
geosfp, geosfp_updater = GEOSFP("4x5"; dtype = Float64,
coord_defaults = Dict(:lon => deg2rad(-97), :lat => deg2rad(40), :lev => 1.0),
cache_size = 3)
EarthSciData.lazyload!(geosfp_updater, datetime2unix(starttime))
@parameters lon=deg2rad(-97) [unit = u"rad"]
@parameters lat=deg2rad(40) [unit = u"rad"]
@parameters lev = 3.0
di = DomainInfo(
[partialderivatives_δxyδlonlat,
partialderivatives_δPδlev_geosfp(geosfp)],
constIC(16.0, t ∈ Interval(starttime, endtime)),
constBC(16.0,
lon ∈ Interval(deg2rad(-115), deg2rad(-68.75)),
lat ∈ Interval(deg2rad(25), deg2rad(53.7)),
lev ∈ Interval(1, 15)),
dtype = Float64)
puff = Puff(di)
model = couple(puff, geosfp)
sys = convert(ODESystem, model)
sys, _ = EarthSciMLBase.prune_observed(sys)
@test length(equations(sys)) == 3
@test occursin("PS", string(equations(sys))) # Check that we're using the GEOSFP pressure data.
@test issetequal([Symbol("puff₊lon(t)"), Symbol("puff₊lat(t)"), Symbol("puff₊lev(t)")],
Symbol.(unknowns(sys)))
@test length(parameters(sys)) == 0
@test length(observed(sys)) == 9
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 2004 | using Test
using Main.EnvironmentalTransport
using EarthSciMLBase
using ModelingToolkit
using ModelingToolkit: t
using OrdinaryDiffEq
using DynamicQuantities
using DomainSets
@parameters x=0 [unit = u"m"]
@parameters y=0 [unit = u"m"]
@parameters z=0 [unit = u"m"]
starttime = 0.0
endtime = 1.0
di = DomainInfo(
constIC(16.0, t ∈ Interval(starttime, endtime)),
constBC(16.0,
x ∈ Interval(-1.0, 1.0),
y ∈ Interval(-1.0, 1.0),
z ∈ Interval(-1.0, 1.0)
),
dtype = Float64)
puff = Puff(di)
puff = structural_simplify(puff)
@test length(equations(puff)) == 3
@test issetequal([Symbol("z(t)"), Symbol("x(t)"), Symbol("y(t)")], Symbol.(unknowns(puff)))
@test issetequal([:v_x, :v_y, :v_z], Symbol.(parameters(puff)))
prob = ODEProblem(puff, [], (starttime, endtime), [])
@testset "x terminate +" begin
p = MTKParameters(puff, [puff.v_x => 2.0])
prob2 = remake(prob, p = p)
sol = solve(prob2)
@test sol.t[end] ≈ 0.5
@test sol[puff.x][end] ≈ 1.0
end
@testset "x terminate -" begin
p = MTKParameters(puff, [puff.v_x => -2.0])
prob2 = remake(prob, p = p)
sol = solve(prob2)
@test sol.t[end] ≈ 0.5
@test sol[puff.x][end] ≈ -1.0
end
@testset "y terminate +" begin
p = MTKParameters(puff, [puff.v_y => 2.0])
prob2 = remake(prob, p = p)
sol = solve(prob2)
@test sol.t[end] ≈ 0.5
@test sol[puff.y][end] ≈ 1.0
end
@testset "y terminate -" begin
p = MTKParameters(puff, [puff.v_y => -2.0])
prob2 = remake(prob, p = p)
sol = solve(prob2)
@test sol.t[end] ≈ 0.5
@test sol[puff.y][end] ≈ -1.0
end
@testset "z bounded +" begin
p = MTKParameters(puff, [puff.v_z => 10.0])
prob2 = remake(prob, p = p)
sol = solve(prob2)
@test sol.t[end] ≈ 1
@test maximum(sol[puff.z]) ≈ 1.0
end
@testset "z bounded -" begin
p = MTKParameters(puff, [puff.v_z => -10.0])
prob2 = remake(prob, p = p)
sol = solve(prob2)
@test sol.t[end] ≈ 1
@test minimum(sol[puff.z]) ≈ -1.0
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | code | 561 | using EnvironmentalTransport
using Test, SafeTestsets
@testset "EnvironmentalTransport.jl" begin
@safetestset "Advection Stencils" begin include("advection_stencil_test.jl") end
@safetestset "Boundary Conditions" begin include("boundary_conditions_test.jl") end
@safetestset "Advection" begin include("advection_test.jl") end
@safetestset "Advection Simulator" begin include("advection_simulator_test.jl") end
@safetestset "Puff" begin include("puff_test.jl") end
@safetestset "Puff-GEOSFP" begin include("puff_geosfp_test.jl") end
end
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | docs | 1159 | # EnvironmentalTransport
[](https://EarthSciML.github.io/EnvironmentalTransport.jl/stable/)
[](https://EarthSciML.github.io/EnvironmentalTransport.jl/dev/)
[](https://github.com/EarthSciML/EnvironmentalTransport.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/EarthSciML/EnvironmentalTransport.jl)
[](https://github.com/invenia/BlueStyle)
[](https://github.com/SciML/ColPrac)
[](https://JuliaCI.github.io/NanosoldierReports/pkgeval_badges/E/EnvironmentalTransport.html)
| EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | docs | 5117 | # Numerical Advection Operator
We have two ways to represent phenomena that occur across space, such as advection: through symbolically-defined partial differential equation systems, which are covered elsewhere in the
documentation, and through numerically-implemented algorithms.
This is an example of the latter. (Currently, symbolically-defined PDEs are too slow to be
used in large-scale simulations.)
To demonstrate how it works, let's first set up our environment:
```@example adv
using EnvironmentalTransport
using EarthSciMLBase, EarthSciData
using ModelingToolkit, DomainSets, DifferentialEquations
using ModelingToolkit: t, D
using DynamicQuantities
using Distributions, LinearAlgebra
using Dates
using NCDatasets, Plots
nothing #hide
```
## Emissions
Next, let's set up an emissions scenario to advect.
We have some emissions centered around Portland, starting at the beginning of the simulation and then tapering off:
```@example adv
starttime = datetime2unix(DateTime(2022, 5, 1, 0, 0))
endtime = datetime2unix(DateTime(2022, 6, 1, 0, 0))
@parameters(
lon=-97.0, [unit=u"rad"],
lat=30.0, [unit=u"rad"],
lev=1.0,
)
function emissions(μ_lon, μ_lat, σ)
@variables c(t) = 0.0 [unit=u"kg"]
@constants v_emis = 50.0 [unit=u"kg/s"]
@constants t_unit = 1.0 [unit=u"s"] # Needed so that arguments to `pdf` are unitless.
dist = MvNormal([starttime, μ_lon, μ_lat, 1], Diagonal(map(abs2, [3600.0*24*3, σ, σ, 1])))
ODESystem([D(c) ~ pdf(dist, [t/t_unit, lon, lat, lev]) * v_emis],
t, name = :emissions)
end
emis = emissions(deg2rad(-122.6), deg2rad(45.5), deg2rad(1))
```
## Coupled System
Next, let's set up a spatial and temporal domain for our simulation, and
some input data from GEOS-FP to get wind fields for our advection.
We need to use `coord_defaults` in this case to get the GEOS-FP data to work correctly, but
it doesn't matter what the defaults are.
We also set up an [outputter](https://data.earthsci.dev/stable/api/#EarthSciData.NetCDFOutputter) to save the results of our simulation, and couple the components we've created so far into a
single system.
```@example adv
geosfp, geosfp_updater = GEOSFP("0.5x0.625_NA"; dtype = Float64,
coord_defaults = Dict(:lon => -97.0, :lat => 30.0, :lev => 1.0))
domain = DomainInfo(
[partialderivatives_δxyδlonlat,
partialderivatives_δPδlev_geosfp(geosfp)],
constIC(16.0, t ∈ Interval(starttime, endtime)),
constBC(16.0,
lon ∈ Interval(deg2rad(-129), deg2rad(-61)),
lat ∈ Interval(deg2rad(11), deg2rad(59)),
lev ∈ Interval(1, 30)),
dtype = Float64)
outfile = ("RUNNER_TEMP" ∈ keys(ENV) ? ENV["RUNNER_TEMP"] : tempname()) * "out.nc" # This is just a location to save the output.
output = NetCDFOutputter(outfile, 3600.0)
csys = couple(emis, domain, geosfp, geosfp_updater, output)
```
## Advection Operator
Next, we create an [`AdvectionOperator`](@ref) to perform advection.
We need to specify a time step (300 s in this case) and a stencil algorithm to perform the advection (current options are [`l94_stencil`](@ref), [`ppm_stencil`](@ref), [`upwind1_stencil`](@ref), and [`upwind2_stencil`](@ref)).
We also specify zero gradient boundary conditions.
Then, we couple the advection operator to the rest of the system.
!!! warning
    The advection operator will automatically couple itself to available wind fields such as those from GEOS-FP, but the wind-field component (e.g. `geosfp`) must already be present
    in the coupled system for this to work correctly.
```@example adv
adv = AdvectionOperator(300.0, upwind1_stencil, ZeroGradBC())
csys = couple(csys, adv)
```
Now, we initialize a [`Simulator`](https://base.earthsci.dev/dev/simulator/) to run our demonstration.
We specify a horizontal resolution of 1 degree and a vertical resolution of 1 level, use the `Tsit5` time integrator for our emissions system of equations, and use the `SSPRK22` scheme for time integration of our advection operator.
Refer [here](https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve/) for the available time integrator choices.
We also choose an operator splitting interval of 300 seconds.
Then, we run the simulation.
```@example adv
sim = Simulator(csys, [deg2rad(1), deg2rad(1), 1])
st = SimulatorStrangThreads(Tsit5(), SSPRK22(), 300.0)
@time run!(sim, st, save_on=false, save_start=false, save_end=false,
initialize_save=false)
```
## Visualization
Finally, we can visualize the results of our simulation:
```@example adv
ds = NCDataset(outfile, "r")
imax = argmax(reshape(maximum(ds["emissions₊c"][:, :, :, :], dims=(1, 3, 4)), :))
anim = @animate for i ∈ 1:size(ds["emissions₊c"])[4]
plot(
heatmap(rad2deg.(sim.grid[1]), rad2deg.(sim.grid[2]),
ds["emissions₊c"][:, :, 1, i]', title="Ground-Level", xlabel="Longitude", ylabel="Latitude"),
heatmap(rad2deg.(sim.grid[1]), sim.grid[3], ds["emissions₊c"][:, imax, :, i]',
title="Vertical Cross-Section (lat=$(round(rad2deg(sim.grid[2][imax]), digits=1)))",
xlabel="Longitude", ylabel="Vertical Level"),
)
end
gif(anim, fps = 15)
```
```@setup adv
rm(outfile, force=true)
``` | EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | docs | 98 | # API Index
```@index
```
# API Documentation
```@autodocs
Modules = [EnvironmentalTransport]
``` | EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | docs | 300 | # Redirecting...
```@raw html
<html>
<head>
<meta http-equiv="refresh" content="0; url=https://transport.earthsci.dev/benchmarks/" />
</head>
<body>
<p>If you are not redirected automatically, follow this <a href="https://transport.earthsci.dev/benchmarks/">link</a>.</p>
</body>
</html>
``` | EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | docs | 2295 | ```@meta
CurrentModule = EnvironmentalTransport
```
# EnvironmentalTransport: Algorithms for Environmental Mass Transport
Documentation for [EnvironmentalTransport.jl](https://github.com/EarthSciML/EnvironmentalTransport.jl).
This package contains algorithms for simulating environmental mass transport, for use with the [EarthSciML](https://earthsci.dev) ecosystem.
## Installation
```julia
using Pkg
Pkg.add("EnvironmentalTransport")
```
## Feature Summary
This package contains types and functions designed to simplify the process of constructing and composing symbolically-defined Earth Science model components.
## Feature List
* Numerical Advection
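As a quick illustration of the exported advection stencils (a minimal sketch; the field and wind values are arbitrary):

```julia
using EnvironmentalTransport

ϕ = [0.0, 1.0, 2.0]  # left cell, central cell, right cell
U = [1.0, 1.0]       # velocity at the two edges of the central cell
Δt, Δz = 0.1, 1.0
upwind1_stencil(ϕ, U, Δt, Δz)  # dϕ/dt at the central cell: -1.0
```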
## Contributing
* Please refer to the
[SciML ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://github.com/SciML/ColPrac/blob/master/README.md)
for guidance on PRs, issues, and other matters relating to contributing.
## Reproducibility
```@raw html
<details><summary>The documentation of this EnvironmentalTransport package was built using these direct dependencies,</summary>
```
```@example
using Pkg # hide
Pkg.status() # hide
```
```@raw html
</details>
```
```@raw html
<details><summary>and using this machine and Julia version.</summary>
```
```@example
using InteractiveUtils # hide
versioninfo() # hide
```
```@raw html
</details>
```
```@raw html
<details><summary>A more complete overview of all dependencies and their versions is also provided.</summary>
```
```@example
using Pkg # hide
Pkg.status(;mode = PKGMODE_MANIFEST) # hide
```
```@raw html
</details>
```
```@raw html
You can also download the
<a href="
```
```@eval
using TOML
using Markdown
version = TOML.parse(read("../../Project.toml",String))["version"]
name = TOML.parse(read("../../Project.toml",String))["name"]
link = Markdown.MD("https://github.com/EarthSciML/"*name*".jl/tree/gh-pages/v"*version*"/assets/Manifest.toml")
```
```@raw html
">manifest</a> file and the
<a href="
```
```@eval
using TOML
using Markdown
version = TOML.parse(read("../../Project.toml",String))["version"]
name = TOML.parse(read("../../Project.toml",String))["name"]
link = Markdown.MD("https://github.com/EarthSciML/"*name*".jl/tree/gh-pages/v"*version*"/assets/Project.toml")
```
```@raw html
">project</a> file.
``` | EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.3.0 | 5fa226120074e3996d6922e39bf02df6e1cee1a0 | docs | 2981 | # Air Pollution "Puff" Model Example
```@example puff
using EarthSciMLBase, EarthSciData, EnvironmentalTransport
using ModelingToolkit
using ModelingToolkit: t
using DynamicQuantities
using DifferentialEquations
using Plots
using Dates
using DomainSets
firestart = DateTime(2021, 10, 1)
firelength = 4 * 3600 # Seconds
simulationlength = 1 # Days
firelon = deg2rad(-97)
firelat = deg2rad(40)
fireradius = 0.05 # Degrees
samplerate = 1800.0 # Seconds
samples_per_time = 10 # Samples per each emission time
fireheight = 15.0 # Vertical level (Allowing this to be automatically calculated is a work in progress).
emis_rate = 1.0 # kg/s, fire emission rate
geosfp, _ = GEOSFP("0.5x0.625_NA"; dtype = Float64,
coord_defaults = Dict(:lon => deg2rad(-97), :lat => deg2rad(40), :lev => 1.0),
cache_size=simulationlength*24÷3+2)
@parameters lon = firelon [unit=u"rad"]
@parameters lat = firelat [unit=u"rad"]
@parameters lev = fireheight
sim_end = firestart + Day(simulationlength)
domain = DomainInfo(
[partialderivatives_δxyδlonlat,
partialderivatives_δPδlev_geosfp(geosfp)],
constIC(16.0, t ∈ Interval(firestart, sim_end)),
constBC(16.0,
lon ∈ Interval(deg2rad(-115), deg2rad(-68.75)),
lat ∈ Interval(deg2rad(25), deg2rad(53.7)),
lev ∈ Interval(1, 72)),
dtype = Float64)
puff = Puff(domain)
model = couple(puff, geosfp)
sys = convert(ODESystem, model)
sys, _ = EarthSciMLBase.prune_observed(sys)
u0 = ModelingToolkit.get_defaults(sys)
tspan = (datetime2unix(firestart), datetime2unix(sim_end))
prob=ODEProblem(sys, u0, tspan)
sol = solve(prob, Tsit5()) # Solve once to make sure data is loaded.
function prob_func(prob, i, repeat)
r = rand() * fireradius
θ = rand() * 2π
u0 = [firelon + r * cos(θ), firelat + r * sin(θ), fireheight]
ts = (tspan[1] + floor(i / samples_per_time) * samplerate, tspan[2])
remake(prob, u0 = u0, tspan = ts)
end
eprob = EnsembleProblem(prob, prob_func = prob_func)
esol = solve(eprob, Tsit5(); trajectories=ceil(firelength/samplerate*samples_per_time))
vars = [sys.puff₊lon, sys.puff₊lat, sys.puff₊lev]
ranges = [(Inf, -Inf), (Inf, -Inf), (Inf, -Inf)]
for sol in esol
for (i, var) in enumerate(vars)
rng = (minimum(sol[var]), maximum(sol[var]))
ranges[i] = (min(ranges[i][1], rng[1]),
max(ranges[i][2], rng[2]))
end
end
anim = @animate for t in datetime2unix(firestart):samplerate:datetime2unix(sim_end)
p = plot(
xlim=rad2deg.(ranges[1]), ylim=rad2deg.(ranges[2]), zlim=ranges[3],
title = "Time: $(unix2datetime(t))",
xlabel = "Longitude (deg)", ylabel = "Latitude (deg)",
zlabel = "Vertical Level",
)
for sol in esol
if t < sol.t[1] || t > sol.t[end]
continue
end
scatter!(p,
[rad2deg(sol(t)[1])], [rad2deg(sol(t)[2])], [sol(t)[3]],
label = :none, markercolor=:black, markersize=1.5,
)
end
end
gif(anim, fps=15)
``` | EnvironmentalTransport | https://github.com/EarthSciML/EnvironmentalTransport.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 817 | using FractionalTransforms
using Documenter
DocMeta.setdocmeta!(FractionalTransforms, :DocTestSetup, :(using FractionalTransforms); recursive=true)
makedocs(;
modules=[FractionalTransforms],
authors="Qingyu Qu",
repo="https://github.com/SciFracX/FractionalTransforms.jl/blob/{commit}{path}#{line}",
sitename="FractionalTransforms.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://SciFracX.github.io/FractionalTransforms.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
"Fractional Fourier Transform" => "frft.md",
"Fractional Sine Transform" => "frst.md",
"Fractional Cosine Transform" => "frct.md"
],
)
deploydocs(;
repo="github.com/SciFracX/FractionalTransforms.jl",
)
| FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 221 | module FractionalTransforms
using LinearAlgebra, DSP, ToeplitzMatrices, FFTW
include("frft.jl")
include("frst.jl")
include("frct.jl")
export frft
export freq_shear, time_shear, sinc_interp
export frst
export frct
end | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 1670 | """
frct(signal, α, p)
Compute the `α`-order fractional cosine transform of the input **signal**.
# Example
```julia-repl
julia> frct([1,2,3], 0.5, 2)
3-element Vector{ComplexF64}:
1.707106781186547 + 0.9874368670764581im
1.5606601717798205 - 1.3964466094067267im
-0.3535533905932727 - 0.6982233047033652im
```
### References
```tex
@article{article,
author = {Pei, Soo-Chang and Yeh, Min-Hung},
year = {2001},
month = {07},
pages = {1198 - 1207},
title = {The discrete fractional cosine and sine transforms},
volume = {49},
journal = {Signal Processing, IEEE Transactions on},
doi = {10.1109/78.923302}
}
```
"""
function frct(signal, α, p)
N = length(signal)
@views signal = signal[:]
p = min(max(2, p), N-1)
E = dFRCT(N,p)
result = E *(exp.(-im*pi*α*collect(0:N-1)) .*(E' *signal))
return result
end
function dFRCT(N, p)
N1 = 2*N-2
d2 = [1, -2, 1]
d_p = 1
s = 0
st = zeros(1, N1)
for k = 1:floor(Int, p/2)
if typeof(d_p) <: Number
d_p = @. d2*d_p
else
d_p = conv(d2, d_p)
end
st[vcat(collect(N1-k+1:N1), collect(1:k+1))] = d_p
st[1] = 0
temp = vcat(union(1, collect(1:k-1)), union(1,collect(1:k-1)))
temp = temp[:] ./collect(1:2*k)
s = s.+(-1)^(k-1)*prod(temp)*2*st
end
H = Toeplitz(s[:], s[:]) + diagm(real.(fft(s[:])))
V = hcat(zeros(N-2), zeros(N-2, N-2)+I, zeros(N-2), reverse(zeros(N-2, N-2)+I, dims=1)) ./sqrt(2)
V = vcat([1 zeros(1, N1-1)], V, [zeros(1, N-1) 1 zeros(1, N-2)])
Ev = V*H*V'
_, ee = eigen(Ev)
E = reverse(ee, dims=2)
E[end,:] = E[end,:]/sqrt(2)
return E
end | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 2243 | """
frft(signal, α)
Compute the `α`-order fractional Fourier transform of the input **signal**.
# Example
```julia-repl
julia> frft([1,2,3], 0.5)
3-element Vector{ComplexF64}:
0.2184858049179108 - 0.48050718989735614im
3.1682634817489754 + 0.38301661364477946im
1.3827766087134183 - 1.1551815500981393im
```
"""
function frft(f, α)
f = f[:]
f=ComplexF64.(f)
N = length(f)
signal = zeros(ComplexF64, N)
shft = rem.(collect(Int64, 0:N-1).+floor(Int64, N/2), N).+1
sN = sqrt(N)
α = mod(α, 4)
if α==0
signal = f
return signal
end
if α==2
signal = reverse(f, dims=1)
return signal
end
if α==1
signal[shft,1] = fft(f[shft])/sN
return signal
end
if α==3
signal[shft, 1] = ifft(f[shft])*sN
return signal
end
if α>2.0
α = α-2
f = reverse(f, dims=1)
end
if α>1.5
α = α-1
f[shft] = fft(f[shft])/sN
end
if α<0.5
α = α+1
f[shft, 1] = ifft(f[shft])*sN
end
alpha = α*pi/2
tana2 = tan(alpha/2)
sina = sin(alpha)
f = [zeros(N-1,1) ; interp(f) ; zeros(N-1,1)]
chrp = exp.(-im*pi/N*tana2/4*collect(-2*N+2:2*N-2).^2)
f = chrp.*f
c = pi/N/sina/4;
signal = fconv(exp.(im*c*collect(-(4*N-4):4*N-4).^2), f)
signal = signal[4*N-3:8*N-7]*sqrt(c/pi)
signal = chrp.*signal
signal = @. exp(-im*(1-α)*pi/4)*signal[N:2:end-N+1]
return signal
end
function interp(x)
N = length(x)
y = zeros(ComplexF64, 2*N-1, 1)
y[1:2:2*N-1] = x
xint = fconv(y[1:2*N-1], sinc.(collect(-(2*N-3):(2*N-3))'/2))
xint = xint[2*N-2:end-2*N+3]
return xint
end
function fconv(x,y)
N = length([x[:]; y[:]])-1
P::Int = nextpow(2, N)
z = ifft(ourfft(x, P) .* ourfft(y, P))
z = z[1:N]
return z
end
function ourfft(x, n)
s=length(x)
x=x[:]
if s > n
return fft(x[1:n])
elseif s < n
return fft([x; zeros(n-s)])
else
return fft(x)
end
end
function ourifft(x, n)
s=length(x)
x=x[:]
if s > n
return ifft(x[1:n])
elseif s < n
return ifft([x; zeros(n-s)])
else
return ifft(x)
end
end | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 1553 | """
frst(signal, α, p)
Compute the `α`-order fractional sine transform of the input **signal**.
# Example
```julia-repl
julia> frst([1,2,3], 0.5, 2)
3-element Vector{ComplexF64}:
1.707106781186548 + 1.207106781186547im
1.9999999999999998 - 1.7071067811865481im
-1.1213203435596437 - 1.2071067811865468im
```
### References
```tex
@article{article,
author = {Pei, Soo-Chang and Yeh, Min-Hung},
year = {2001},
month = {07},
pages = {1198 - 1207},
title = {The discrete fractional cosine and sine transforms},
volume = {49},
journal = {Signal Processing, IEEE Transactions on},
doi = {10.1109/78.923302}
}
```
"""
function frst(signal, α, p)
N = length(signal)
@views signal = signal[:]
p = min(max(2, p), N-1)
E = dFRST(N,p)
result = E *(exp.(-im*pi*α*collect(0:N-1)) .*(E' *signal))
return result
end
function dFRST(N, p)
N1 = 2*N+2
d2 = [1, -2, 1]
d_p = 1
s = 0
st = zeros(1, N1)
for k = 1:floor(Int, p/2)
if isa(d_p, Number)
d_p = @. d2*d_p
else
d_p = conv(d2, d_p)
end
st[vcat(collect(N1-k+1:N1), collect(1:k+1))] = d_p
st[1]=0
temp=vcat(union(1, collect(1:k-1)), union(2,collect(1:k-1)))
temp = temp[:] ./collect(1:2*k)
s=s.+(-1)^(k-1)*prod(temp)*2*st
end
H = Toeplitz(s[:], s[:]) +diagm(real.(fft(s[:])))
V = hcat(zeros(N), reverse(zeros(N, N)+I, dims=1), zeros(N), -(zeros(N, N)+I)) ./sqrt(2)
Od = V*H*V'
_, vo = eigen(Od)
return reverse(reverse(vo, dims=2), dims=1)
end | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 313 | using FractionalTransforms
using Test
@testset "Test FRCT" begin
@test isapprox(real(frct([1,2,3], 0.5, 2)), [1.707106781186547, 1.5606601717798205, -0.3535533905932727], atol=1e-5)
@test isapprox(imag(frct([1,2,3], 0.5, 2)), [0.9874368670764581, -1.3964466094067267, -0.6982233047033652], atol=1e-5)
end | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 1566 | using FractionalTransforms
using Test
@testset "Test FRFT" begin
@test isapprox(real(frft([1, 2, 3], 0.5)), [0.2184858049179108
3.1682634817489754
1.3827766087134183], atol=1e-5)
@test isapprox(imag(frft([1, 2, 3], 0.5)), [-0.48050718989735614
0.38301661364477946
-1.1551815500981393], atol=1e-5)
@test isapprox(real(frft([1, 2, 3, 4, 5], 0.3)), [0.6317225402281863
1.921850573794287
2.9473079123194106
4.7693889446948665
1.1215925410506982]; atol=1e-6)
@test isapprox(imag(frft([1, 2, 3, 4, 5], 0.3)), [-0.6482786115154652
-0.3257271769214941
0.9987425650468786
-0.951165035143666
-3.2965111836383025]; atol=1e-6)
@test isapprox(real(frft([1, 2, 3, 4, 5], 1)), [0.0
0.0
6.7082039324993685
0.0
0.0]; atol=1e-6)
end
@testset "Test FRFT auxillary functions" begin
@testset "Test Sinc Interpolation" begin
@test isapprox(real(sinc_interp([1, 2, 3], 3)), [1.0, 1.1577906803857638, 1.4472383504822042, 2.0, 2.6877283651812367, 3.1425747039042147, 3.0]; atol=1e-5)
@test isapprox(imag(sinc_interp([1, 2, 3], 3)), [-4.0696311822644287e-17, -2.4671622769447922e-17, -2.522336560207922e-16, -3.331855648569948e-17, 0.0, 1.9624582529744518e-17, 3.331855648569948e-17]; atol=1e-5)
end
@testset "Test Frequency Shear" begin
@test isapprox(real(freq_shear([1, 2, 3], 3)), [0.0707372016677029, 2.0, 0.2122116050031087]; atol=1e-5)
@test isapprox(imag(freq_shear([1, 2, 3], 3)), [0.9974949866040544, 0.0, 2.9924849598121632]; atol=1e-5)
end
end | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 295 | using FractionalTransforms
using Test
@testset "Test FRST" begin
@test isapprox(real(frst([1,2,3], 0.5, 2)), [1.707106781186548, 2, -1.1213203435596437], atol=1e-5)
@test isapprox(imag(frst([1,2,3], 0.5, 2)), [1.207106781186547, -1.7071067811865481, -1.2071067811865468], atol=1e-5)
end | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | code | 153 | using FractionalTransforms
using Test
@testset "FractionalTransforms.jl" begin
include("frft.jl")
include("frst.jl")
include("frct.jl")
end
| FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | docs | 3407 | # FractionalTransforms.jl
<p align="center">
<img width="250px" src="https://raw.githubusercontent.com/SciFracX/FractionalTransforms.jl/master/docs/src/assets/logo.svg"/>
</p>
<p align="center">
<a href="https://github.com/SciFracX/FractionalTransforms.jl/actions?query=workflow%3ACI">
<img alt="building" src="https://github.com/SciFracX/FractionalTransforms.jl/workflows/CI/badge.svg">
</a>
<a href="https://codecov.io/gh/SciFracX/FractionalTransforms.jl">
<img alt="codecov" src="https://codecov.io/gh/SciFracX/FractionalTransforms.jl/branch/master/graph/badge.svg">
</a>
<a href="https://www.erikqqy.xyz/FRFT.jl/dev/">
<img src="https://img.shields.io/badge/docs-dev-blue.svg" alt="license">
</a>
<a href="https://github.com/SciFracX/FractionalTransforms.jl/blob/master/LICENSE">
<img src="https://img.shields.io/github/license/SciFracX/FractionalTransforms.jl?style=flat-square" alt="license">
</a>
</p>
<p align="center">
<a href="https://github.com/SciFracX/FractionalTransforms.jl/issues">
<img alt="GitHub issues" src="https://img.shields.io/github/issues/SciFracX/FractionalTransforms.jl?style=flat-square">
</a>
<a href="#">
<img alt="GitHub stars" src="https://img.shields.io/github/stars/SciFracX/FractionalTransforms.jl?style=flat-square">
</a>
<a href="https://github.com/SciFracX/FractionalTransforms.jl/network">
<img alt="GitHub forks" src="https://img.shields.io/github/forks/SciFracX/FractionalTransforms.jl?style=flat-square">
</a>
</p>
## Installation
If you have already installed Julia, you can install FractionalTransforms.jl in the REPL using the Julia package manager:
```julia
pkg> add FractionalTransforms
```
## Quick start
### Fractional Fourier Transform
Compute the Fractional Fourier transform by the following command:
```julia
julia> frft(signal, order)
```
### Fractional Sine Transform
Compute the Fractional Sine transform by the following command:
```julia
julia> frst(signal, order, p)
```
### Fractional Cosine Transform
Compute the Fractional Cosine transform by the following command:
```julia
julia> frct(signal, order, p)
```
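A minimal end-to-end session tying the three commands together. The concrete argument values below are illustrative and follow the docstring examples (`p` is the approximation-order parameter used by the sine and cosine transforms):

```julia
using FractionalTransforms

signal = [1, 2, 3]

frft(signal, 0.5)     # fractional Fourier transform of order 0.5
frst(signal, 0.5, 2)  # fractional sine transform of order 0.5 with p = 2
frct(signal, 0.5, 2)  # fractional cosine transform of order 0.5 with p = 2
```

Each call returns a complex vector of the same length as the input signal.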
## Introduction
The classical Fourier transform maps a signal from the time domain to the frequency domain. The fractional Fourier transform generalizes this: it maps the signal to an intermediate fractional domain, revealing properties and features of the signal that neither the time nor the frequency representation shows.
## Plans
* Add more examples relating to signal processing, image processing etc.
* Cover more algorithms, including Fractional Hadamard Transform, Fractional Gabor Transform...
## Acknowledgements
I would like to express gratitude to
* *Jeffrey C. O'Neill* for what he has done in [DiscreteTFDs](http://tfd.sourceforge.net/).
* [Digital computation of the fractional Fourier transform](https://ieeexplore.ieee.org/document/536672) by [H.M. Ozaktas](https://ieeexplore.ieee.org/author/37294843100); [O. Arikan](https://ieeexplore.ieee.org/author/37350304900); [M.A. Kutay](https://ieeexplore.ieee.org/author/37350303800); [G. Bozdagt](https://ieeexplore.ieee.org/author/37086987430)
* [The discrete fractional cosine and sine transforms](http://dx.doi.org/10.1109/78.923302) by Pei, Soo-Chang and Yeh, Min-Hung.
* https://nalag.cs.kuleuven.be/research/software/FRFT/
> Please note that FRFT, FRST and FRCT are adapted from MATLAB files; credit goes to the original authors, and any bugs are my own.
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | docs | 69 | # Fractional Cosine Transform
```@docs
FractionalTransforms.frct
``` | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | docs | 2527 | # Fractional Fourier Transform
```@contents
Pages = ["frft.md"]
```
## Definition
While we are already familiar with the Fourier transform, which is defined by the integral of the product of the original function and a kernel function ``e^{-2\pi ix\xi}``:
```math
\hat{f}(\xi)=\mathcal{F}[f(x)]=\int_{-\infty}^\infty f(x)e^{-2\pi ix\xi}dx
```
The Fractional Fourier transform has the similar definition:
```math
\hat{f}(\xi)=\mathcal{F}^{\alpha}[f(x)]=\int_{-\infty}^\infty K(\xi,x)f(x)dx
```
```math
K(\xi,x)=A_\phi \exp[i\pi(x^2\cot\phi-2x\xi\csc\phi+\xi^2\cot\phi)]
```
```math
A_\phi=\frac{\exp(-i\pi\ \mathrm{sgn}(\sin\phi)/4+i\phi/2)}{|\sin\phi|^{1/2}}
```
```math
\phi=\frac{\alpha\pi}{2}
```
To compute the α-order Fractional Fourier transform of a signal, use the **frft** function directly:
```@docs
FractionalTransforms.frft
```
## Features
The fractional Fourier transform algorithm runs in ``O(N\log N)`` time.
## Relationship with FRST and FRCT
For even or odd signals, the FRFT can be computed with a smaller transform kernel using the FRCT or the FRST, respectively.
## Algorithm details
The numerical algorithm can be summarized as follows:
The definition of the [Fractional Fourier Transform](https://en.wikipedia.org/wiki/Fractional_Fourier_transform) is first discretized using [Shannon interpolation](https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula); after limiting the range, the resulting expression can be recognized as the [convolution](https://en.wikipedia.org/wiki/Convolution) of a kernel function with a chirp-modulated version of the signal. This convolution can be computed with the [FFT](https://en.wikipedia.org/wiki/Fast_Fourier_transform) in $O(N\log N)$ time. Finally, the output is post-processed using [chirp modulation](https://en.wikipedia.org/wiki/Chirp).
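The integer-order special cases are handled separately, before the chirp steps; in particular, at order ``α = 1`` the transform reduces exactly to a centered, unitarily normalized DFT. This makes a convenient sanity check for the implementation. A minimal sketch, assuming FFTW is available:

```julia
using FractionalTransforms, FFTW

x = ComplexF64[1, 2, 3, 4, 5]
N = length(x)

# Index permutation that centers the zero-frequency bin,
# matching the shift used inside `frft`.
shft = rem.(collect(0:N-1) .+ floor(Int, N / 2), N) .+ 1

# Centered DFT with unitary 1/√N normalization.
y = zeros(ComplexF64, N)
y[shft] = fft(x[shft]) / sqrt(N)

frft(x, 1) ≈ y
```

This reproduces the `α == 1` branch of `frft`, so the comparison holds to machine precision.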
## Acknowledgements
The fractional Fourier transform algorithm is taken from [Digital computation of the fractional Fourier transform](https://ieeexplore.ieee.org/document/536672) by [H.M. Ozaktas](https://ieeexplore.ieee.org/author/37294843100), [O. Arikan](https://ieeexplore.ieee.org/author/37350304900), [M.A. Kutay](https://ieeexplore.ieee.org/author/37350303800), and [G. Bozdagt](https://ieeexplore.ieee.org/author/37086987430), and from *Jeffrey C. O'Neill*'s work in [DiscreteTFDs](http://tfd.sourceforge.net/).
!!! tip "Value of α"
In FractionalTransforms.jl, α is processed as order, while in some books or papers, α would mean **angle**, which is **order\*π/2** | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | docs | 139 | # Fractional Sine Transform
```@docs
FractionalTransforms.frst
```
## Property
* Unitarity
* Angle additivity
* Periodicity
* Symmetric
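Two of these properties are easy to verify numerically: unitarity (the transform preserves the signal's energy) and angle additivity (composing transforms of orders `a` and `b` matches a single transform of order `a + b`, for a fixed `p`). A minimal sketch:

```julia
using FractionalTransforms, LinearAlgebra

x = [1.0, 2.0, 3.0, 4.0]

# Unitarity: the transform preserves the signal's energy.
norm(frst(x, 0.5, 2)) ≈ norm(x)

# Angle additivity: order 0.3 followed by order 0.2 equals order 0.5.
frst(frst(x, 0.3, 2), 0.2, 2) ≈ frst(x, 0.5, 2)
```

Both identities hold because the transform is built from a fixed orthonormal eigenvector matrix combined with unit-modulus eigenvalue powers.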
| FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.3 | de91207d8a5c788c2c333c188ce1c5bd594ef0ce | docs | 1446 | # FractionalTransforms.jl
Hello there👋!
FractionalTransforms.jl is a Julia package aiming to provide support for computing fractional transforms.
## Installation
To install FractionalTransforms.jl, open the Julia REPL and press the `]` key to enter package mode, then type the following command:
```julia
pkg> add FractionalTransforms
```
Or if you want to experience the latest version of FractionalTransforms.jl:
```julia
pkg> add FractionalTransforms#master
```
## Plans
* Add more examples relating to signal processing, image processing etc.
* Cover more algorithms, including Fractional Hadamard Transform...
## Acknowledgements
I would like to express gratitude to
* *Jeffrey C. O'Neill* for what he has done in [DiscreteTFDs](http://tfd.sourceforge.net/).
* [Digital computation of the fractional Fourier transform](https://ieeexplore.ieee.org/document/536672) by [H.M. Ozaktas](https://ieeexplore.ieee.org/author/37294843100); [O. Arikan](https://ieeexplore.ieee.org/author/37350304900); [M.A. Kutay](https://ieeexplore.ieee.org/author/37350303800); [G. Bozdagt](https://ieeexplore.ieee.org/author/37086987430)
* [The discrete fractional cosine and sine transforms](http://dx.doi.org/10.1109/78.923302) by Pei, Soo-Chang and Yeh, Min-Hung.
**FractionalTransforms.jl** is built upon the hard work of many scientific researchers, I sincerely appreciate what they have done to help the development of science and technology. | FractionalTransforms | https://github.com/SciFracX/FractionalTransforms.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 689 | using ConstraintModels
using Documenter
DocMeta.setdocmeta!(ConstraintModels, :DocTestSetup, :(using ConstraintModels); recursive=true)
makedocs(;
modules=[ConstraintModels],
authors="Jean-Francois Baffier",
repo="https://github.com/JuliaConstraints/ConstraintModels.jl/blob/{commit}{path}#{line}",
sitename="ConstraintModels.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://JuliaConstraints.github.io/ConstraintModels.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/JuliaConstraints/ConstraintModels.jl",
devbranch="main",
)
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 513 | module ConstraintModels
using CBLS
using Constraints
using Dictionaries
using JuMP
using LocalSearchSolvers
const LS = LocalSearchSolvers
import LocalSearchSolvers: Options
export chemical_equilibrium
export golomb
export qap
export magic_square
export mincut
export n_queens
export scheduling
export sudoku
include("assignment.jl")
include("chemical_equilibrium.jl")
include("cut.jl")
include("golomb.jl")
include("magic_square.jl")
include("n_queens.jl")
include("scheduling.jl")
include("sudoku.jl")
end
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 400 | function qap(n, W, D, ::Val{:JuMP})
model = JuMP.Model(CBLS.Optimizer)
@variable(model, 1 ≤ X[1:n] ≤ n, Int)
@constraint(model, X in AllDifferent())
Σwd = p -> sum(sum(W[p[i], p[j]] * D[i, j] for j in 1:n) for i in 1:n)
@objective(model, Min, ScalarFunction(Σwd))
return model, X
end
qap(n, weigths, distances; modeler = :JuMP) = qap(n, weigths, distances, Val(modeler))
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 686 | function chemical_equilibrium(A, B, C)
model = JuMP.Model(CBLS.Optimizer)
n = length(C)
m = length(B)
# Add the number of moles per compound (continuous interval)
@variable(model, 0 ≤ X[1:n] ≤ maximum(B))
# mass_conservation function
conserve = i -> (x ->
begin
δ = abs(sum(A[:, i] .* x) - B[i])
return δ ≤ 1.e-6 ? 0. : δ
end
)
for i in 1:m
@constraint(model, X in Error(conserve(i)))
end
# computes the total energy freed by the reaction
free_energy = x -> sum(j -> x[j] * (C[j] + log(x[j] / sum(x))), 1:n)
@objective(model, Min, ScalarFunction(free_energy))
return model, X
end
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 1151 | """
mincut(graph::AbstractMatrix{T}; source::Int, sink::Int, interdiction::Int = 0) where T <: Number
Compute the minimum cut of a graph.
# Arguments:
- `graph`: Any matrix <: AbstractMatrix that describes the capacities of the graph
- `source`: Id of the source node; must be set
- `sink`: Id of the sink node; must be set
- `interdiction`: indicates the number of forbidden links
"""
function mincut(graph; source, sink, interdiction=0)
m = model(; kind=:cut)
n = size(graph, 1)
d = domain(0:n)
separator = n + 1 # value that separate the two sides of the cut
# Add variables:
foreach(_ -> variable!(m, d), 0:n)
# Extract error function from usual_constraint
e1 = (x; param=nothing, dom_size=n + 1) -> error_f(
usual_constraints[:ordered])(x; param, dom_size
)
e2 = (x; param=nothing, dom_size=n + 1) -> error_f(
usual_constraints[:all_different])(x; param, dom_size
)
# Add constraint
constraint!(m, e1, [source, separator, sink])
constraint!(m, e2, 1:(n + 1))
# Add objective
objective!(m, (x...) -> o_mincut(graph, x...; interdiction))
return m
end
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 1762 | function golomb(n, L, ::Val{:raw})
m = model(; kind=:golomb)
# Add variables
d = domain(0:L)
foreach(_ -> variable!(m, d), 1:n)
# Extract error function from usual_constraint
e1 = (x; param=nothing, dom_size=n) -> error_f(
usual_constraints[:all_different])(x; param, dom_size
)
e2 = (x; param=nothing, dom_size=n) -> error_f(
usual_constraints[:all_equal_param])(x; param, dom_size
)
e3 = (x; param=nothing, dom_size=n) -> error_f(
usual_constraints[:dist_different])(x; param, dom_size
)
# # Add constraints
constraint!(m, e1, 1:n)
constraint!(m, x -> e2(x; param=0), 1:1)
for i in 1:(n - 1), j in (i + 1):n, k in i:(n - 1), l in (k + 1):n
(i, j) < (k, l) || continue
constraint!(m, e3, [i, j, k, l])
end
# Add objective
objective!(m, o_dist_extrema)
return m
end
function golomb(n, L, ::Val{:JuMP})
m = JuMP.Model(CBLS.Optimizer)
@variable(m, 1 ≤ X[1:n] ≤ L, Int)
@constraint(m, X in AllDifferent()) # different marks
@constraint(m, X in Ordered()) # for output convenience, keep them ordered
# No two pairs have the same length
for i in 1:(n - 1), j in (i + 1):n, k in i:(n - 1), l in (k + 1):n
(i, j) < (k, l) || continue
@constraint(m, [X[i], X[j], X[k], X[l]] in DistDifferent())
end
# Add objective
@objective(m, Min, ScalarFunction(maximum))
return m, X
end
"""
golomb(n, L=n²)
Model the Golomb problem of `n` marks on the ruler `0:L`. The `modeler` argument accepts :raw and :JuMP (default), which refer respectively to the solver's internal model and the JuMP model.
"""
golomb(n, L=n^2; modeler = :JuMP) = golomb(n, L, Val(modeler))
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 845 | function magic_square(n, ::Val{:JuMP})
N = n^2
model = JuMP.Model(CBLS.Optimizer)
magic_constant = n * (N + 1) / 2
@variable(model, 1 ≤ X[1:n, 1:n] ≤ N, Int)
@constraint(model, vec(X) in AllDifferent())
for i in 1:n
@constraint(model, X[i,:] in SumEqualParam(magic_constant))
@constraint(model, X[:,i] in SumEqualParam(magic_constant))
end
@constraint(model, [X[i,i] for i in 1:n] in SumEqualParam(magic_constant))
@constraint(model, [X[i,n + 1 - i] for i in 1:n] in SumEqualParam(magic_constant))
return model, X
end
"""
magic_square(n; modeler = :JuMP)
Create a model for the magic square problem of order `n`. The `modeler` argument accepts :JuMP (default), which refers to the JuMP model.
"""
magic_square(n; modeler = :JuMP) = magic_square(n, Val(modeler))
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 692 | function n_queens(n, ::Val{:JuMP})
model = JuMP.Model(CBLS.Optimizer)
@variable(model, 1 ≤ Q[1:n] ≤ n, Int)
@constraint(model, Q in AllDifferent())
for i in 1:n, j in i + 1:n
@constraint(model, [Q[i],Q[j]] in Predicate(x -> x[1] != x[2]))
@constraint(model, [Q[i],Q[j]] in Predicate(x -> x[1] != x[2] + i - j))
@constraint(model, [Q[i],Q[j]] in Predicate(x -> x[1] != x[2] + j - i))
end
return model, Q
end
"""
n_queens(n; modeler = :JuMP)
Create a model for the n-queens problem with `n` queens. The `modeler` argument accepts :JuMP (default), which refers to the JuMP model.
"""
n_queens(n; modeler=:JuMP) = n_queens(n, Val(modeler))
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 1739 | function scheduling(processing_times, due_dates, ::Val{:raw})
m = model(; kind = :scheduling)
n = length(processing_times) # number of jobs
max_time = sum(processing_times)
d = domain(0:max_time)
foreach(_ -> variable!(m, d), 1:n) # C (1:n)
foreach(_ -> variable!(m, d), 1:n) # S (n+1:2n)
minus_eq_param(param) = x -> abs(x[1] - x[2] - param)
less_than_param(param) = x -> abs(x[1] - param)
sequential_tasks(x) = x[1] ≤ x[2] || x[3] ≤ x[4]
for i in 1:n
constraint!(m, minus_eq_param(processing_times[i]), [i, i + n])
constraint!(m, less_than_param(due_dates[i]), [i])
end
for i in 1:n, j in 1:n
i == j && continue
constraint!(m, sequential_tasks, [i, j + n, j, i + n])
end
return m
end
function scheduling(processing_times, due_dates, ::Val{:JuMP})
model = JuMP.Model(CBLS.Optimizer)
n = length(processing_times) # number of jobs
max_time = sum(processing_times)
@variable(model, 0 ≤ C[1:n] ≤ max_time, Int) # completion
@variable(model, 0 ≤ S[1:n] ≤ max_time, Int) # start
for i in 1:n
@constraint(model, [C[i], S[i]] in MinusEqualParam(processing_times[i]))
@constraint(model, [C[i]] in LessThanParam(due_dates[i]))
end
for i in 1:n, j in 1:n
i == j && continue
@constraint(model, [C[i], S[j], C[j], S[i]] in SequentialTasks())
end
return model, C, S
end
"""
scheduling(processing_time, due_date; modeler=:JuMP)
Create a model for a scheduling problem defined by `processing_times` and `due_dates`. The `modeler` argument accepts :raw and :JuMP (default), which refer respectively to the solver's internal model and the JuMP model.
"""
scheduling(processing_times, due_dates; modeler=:JuMP) = scheduling(processing_times, due_dates, Val(modeler))
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
|
[
"MIT"
] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 10524 | function sudoku(n, start, ::Val{:raw})
N = n^2
d = domain(1:N)
m = model(;kind=:sudoku)
# Add variables
if isnothing(start)
foreach(_ -> variable!(m, d), 1:(N^2))
else
foreach(((x, v),) -> variable!(m, 1 ≤ v ≤ N ? domain(v) : d), pairs(start))
end
e = (x; param=nothing, dom_size=N) -> error_f(
usual_constraints[:all_different])(x; param=param, dom_size=dom_size
)
# Add constraints: line, columns; blocks
foreach(i -> constraint!(m, e, (i * N + 1):((i + 1) * N)), 0:(N - 1))
foreach(i -> constraint!(m, e, [j * N + i for j in 0:(N - 1)]), 1:N)
for i in 0:(n - 1)
for j in 0:(n - 1)
vars = Vector{Int}()
for k in 1:n
for l in 0:(n - 1)
push!(vars, (j * n + l) * N + i * n + k)
end
end
constraint!(m, e, vars)
end
end
return m
end
function sudoku(n, start, ::Val{:MOI})
N = n^2
m = Optimizer()
MOI.add_variables(m, N^2)
# Add domain to variables
# TODO: make it accept starting solutions
foreach(i -> MOI.add_constraint(m, VI(i), DiscreteSet(1:N)), 1:N^2)
# Add constraints: line, columns; blocks
foreach(i -> MOI.add_constraint(m, VOV(map(VI, (i * N + 1):((i + 1) * N))),
MOIAllDifferent(N)), 0:(N - 1))
foreach(i -> MOI.add_constraint(m, VOV(map(VI, [j * N + i for j in 0:(N - 1)])),
MOIAllDifferent(N)), 1:N)
for i in 0:(n - 1)
for j in 0:(n - 1)
vars = Vector{Int}()
for k in 1:n
for l in 0:(n - 1)
push!(vars, (j * n + l) * N + i * n + k)
end
end
MOI.add_constraint(m, VOV(map(VI, vars)), MOIAllDifferent(N))
end
end
return m
end
function sudoku(n, start, ::Val{:JuMP})
N = n^2
m = JuMP.Model(CBLS.Optimizer)
@variable(m, 1 ≤ X[1:N, 1:N] ≤ N, Int)
if !isnothing(start)
for i in 1:N, j in 1:N
v_ij = start[i,j]
if 1 ≤ v_ij ≤ N
@constraint(m, X[i,j] == v_ij)
end
end
end
for i in 1:N
@constraint(m, X[i,:] in AllDifferent()) # rows
@constraint(m, X[:,i] in AllDifferent()) # columns
end
for i in 0:(n-1), j in 0:(n-1)
@constraint(m, vec(X[(i*n+1):(n*(i+1)), (j*n+1):(n*(j+1))]) in AllDifferent()) # blocks
end
return m, X
end
"""
sudoku(n; start= Dictionary{Int, Int}(), modeler = :JuMP)
Create a model for the sudoku problem of domain `1:n²` with optional starting values. The `modeler` argument accepts :raw, :MOI, and :JuMP (default), which refer respectively to the solver internal model, the MathOptInterface model, and the JuMP model.
```julia
# Construct a JuMP model `m` and its associated matrix `grid` for sudoku 9×9
m, grid = sudoku(3)
# Same with a starting instance
instance = [
9 3 0 0 0 0 0 4 0
0 0 0 0 4 2 0 9 0
8 0 0 1 9 6 7 0 0
0 0 0 4 7 0 0 0 0
0 2 0 0 0 0 0 6 0
0 0 0 0 2 3 0 0 0
0 0 8 5 3 1 0 0 2
0 9 0 2 8 0 0 0 0
0 7 0 0 0 0 0 5 3
]
m, grid = sudoku(3, start = instance)
# Run the solver
optimize!(m)
# Retrieve and display the values
solution = value.(grid)
display(solution, Val(:sudoku))
```
"""
sudoku(n; start=nothing, modeler = :JuMP) = sudoku(n, start, Val(modeler))
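The docstring example above covers the default `:JuMP` modeler. As a hedged sketch of the other two modelers (mirroring the package's test suite; option names come from `LocalSearchSolvers` and may differ across versions):

```julia
using ConstraintModels, LocalSearchSolvers
import MathOptInterface as MOI

# :raw — the solver's internal model, wrapped in a local-search solver.
s = solver(sudoku(2; modeler = :raw);
           options = Options(print_level = :minimal, iteration = 1000))
solve!(s)
solution(s)    # Dictionary mapping variable indices to values

# :MOI — a MathOptInterface model, optimized through the MOI API.
opt = sudoku(2; modeler = :MOI)
MOI.optimize!(opt)
```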
@doc raw"""
```julia
mutable struct SudokuInstance{T <: Integer} <: AbstractMatrix{T}
```
A `struct` for SudokuInstances, which is a subtype of `AbstractMatrix`.
---
```julia
SudokuInstance(A::AbstractMatrix{T})
SudokuInstance(::Type{T}, n::Int) # fill in blank sudoku of type T
SudokuInstance(n::Int) # fill in blank sudoku of type Int
SudokuInstance(::Type{T}) # fill in "standard" 9×9 sudoku of type T
SudokuInstance() # fill in "standard" 9×9 sudoku of type Int
SudokuInstance(n::Int, P::Pair{Tuple{Int, Int}, T}...) where {T <: Integer} # construct a sudoku given pairs of coordinates and values
SudokuInstance(P::Pair{Tuple{Int, Int}, T}...) # again, default to 9×9 sudoku, constructing given pairs
```
Constructor functions for the `SudokuInstance` `struct`.
"""
mutable struct SudokuInstance{T <: Integer} <: AbstractMatrix{T}
    A::AbstractMatrix{T}
function SudokuInstance(A::AbstractMatrix{T}) where {T <: Integer}
        size(A, 1) == size(A, 2) || error("Sudokus must be square; received matrix of size $(size(A, 1))×$(size(A, 2)).")
        isqrt(size(A, 1))^2 == size(A, 1) || error("SudokuInstances must split into equal boxes (e.g., a 9×9 SudokuInstance has nine 3×3 boxes). Size given is $(size(A, 1))×$(size(A, 2)).")
new{T}(A)
end
# # fill in blank sudoku if needed
# SudokuInstance(::Type{T}, n::Int) where {T <: Integer} = new{T}(fill(zero(T), n, n))
# SudokuInstance(n::Int) = new{Int}(SudokuInstance(Int, n))
# # Use "standard" 9×9 if no size provided
# SudokuInstance(::Type{T}) where {T <: Integer} = new{T}(SudokuInstance(T, 9))
# SudokuInstance() = new{Int}(SudokuInstance(9))
# # Construct a sudoku given coordinates and values
# function SudokuInstance(n::Int, P::Pair{Tuple{Int,Int},T}...) where {T <: Integer}
# A = zeros(T, n, n)
# for (i, v) in P
# A[i...] = v
# end
# new{T}(A)
# end
# # again, default to 9×9
# SudokuInstance(P::Pair{Tuple{Int,Int},T}...) where {T <: Integer} = new{T}(SudokuInstance(9, P...))
end
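A quick illustration of the inner-constructor checks above (values are hypothetical; assumes the definitions in this file are loaded). Note the grid side must itself be a perfect square so that the grid splits into boxes:

```julia
S = SudokuInstance([1 2 3 4; 3 4 1 2; 2 1 4 3; 4 3 2 1])  # 4×4 grid, 2×2 boxes
size(S)        # (4, 4), via the AbstractMatrix interface defined below
S[2, 1]        # 3

# Both of the following throw: the first matrix is not square, and the
# second has side 5, which is not a perfect square.
# SudokuInstance([1 2 3; 4 5 6])
# SudokuInstance(rand(1:5, 5, 5))
```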
"""
SudokuInstance(X::Dictionary)
Construct a `SudokuInstance` with the values `X` of a solver as input.
"""
function SudokuInstance(X::Dictionary)
n = isqrt(length(X))
A = zeros(Int, n, n)
for (k,v) in enumerate(Base.Iterators.partition(X, n))
A[k,:] = v
end
return SudokuInstance(A)
end
SudokuInstance(X::Matrix{Float64}) = SudokuInstance(map(Int, X))
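A hedged round-trip sketch for the two conversion constructors above: solver values arrive as a `Dictionaries.Dictionary` (or as a `Matrix{Float64}` from JuMP's `value.`) and are reshaped row by row into a square grid. The values below are hypothetical:

```julia
using Dictionaries

vals = Dictionary(1:16, [1,2,3,4, 3,4,1,2, 2,1,4,3, 4,3,2,1])
S = SudokuInstance(vals)   # 4×4 grid; row k takes entries 4(k-1)+1:4k
S[1, :]                    # [1, 2, 3, 4]

# JuMP's `value.(grid)` returns Float64s; the convenience method converts.
SudokuInstance(Float64[1 2 3 4; 3 4 1 2; 2 1 4 3; 4 3 2 1])
```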
# # abstract array interface for SudokuInstance struct
"""
Base.size(S::SudokuInstance)
Extends `Base.size` for `SudokuInstance`.
"""
Base.size(S::SudokuInstance) = size(S.A)
"""
Base.getindex(S::SudokuInstance, i::Int)
Base.getindex(S::SudokuInstance, I::Vararg{Int,N}) where {N}
Extends `Base.getindex` for `SudokuInstance`.
"""
# Base.getindex(S::SudokuInstance, i::Int) = getindex(S.A, i)
Base.getindex(S::SudokuInstance, I::Vararg{Int,N}) where {N} = getindex(S.A, I...)
# setindex! is intentionally disabled; its docstring is commented out as well,
# otherwise the docstring would attach to the next definition (`_rules`).
# """
#     Base.setindex!(S::SudokuInstance, v, i::Int)
#     Base.setindex!(S::SudokuInstance, v, I::Vararg{Int,N})
#
# Extends `Base.setindex!` for `SudokuInstance`.
# """
# Base.setindex!(S::SudokuInstance, v, i) = setindex!(S.A, v, i)
# Base.setindex!(S::SudokuInstance, v, I::Vararg) = setindex!(S.A, v, I...)
const _rules = Dict(
:up_right_corner => '┐',
:up_left_corner => '┌',
:bottom_left_corner => '└',
:bottom_right_corner => '┘',
:up_intersection => '┬',
:left_intersection => '├',
:right_intersection => '┤',
:middle_intersection => '┼',
:bottom_intersection => '┴',
:column => '│',
:row => '─',
:blank => '⋅', # this is the character used for 0s in a SudokuInstance puzzle
)
"""
_format_val(a)
Format an integer `a` into a string for SudokuInstance.
"""
_format_val(a) = iszero(a) ? _rules[:blank] : string(a)
"""
_format_line_segment(r, col_pos, M)
Format line segment of a sudoku grid.
"""
function _format_line_segment(r, col_pos, M)
sep_length = length(r)
line = string()
for k in axes(r, 1)
n_spaces = 1
Δ = maximum((ndigits(i) for i in M[:, (col_pos * sep_length) + k])) - ndigits(r[k])
if Δ ≥ 0
n_spaces = Δ + 1
end
line *= repeat(' ', n_spaces) * _format_val(r[k])
end
return line * ' ' * _rules[:column]
end
"""
_format_line(r, M)
Format line of a sudoku grid.
"""
function _format_line(r, M)
sep_length = isqrt(length(r))
line = _rules[:column]
for i in 1:sep_length
abs_sep_pos = sep_length * i
line *= _format_line_segment(r[(abs_sep_pos - sep_length + 1):abs_sep_pos], i - 1, M)
end
return line
end
"""
_get_sep_line(s, pos_row, M)
Return a line separator.
"""
function _get_sep_line(s, pos_row, M)
sep_length = isqrt(s)
# deal with left-most edges
sep_line = string()
if pos_row == 1
sep_line *= _rules[:up_left_corner]
elseif mod(pos_row, sep_length) == 0
if pos_row == s
sep_line *= _rules[:bottom_left_corner]
else
sep_line *= _rules[:left_intersection]
end
end
    # rest of the row separator; TODO: simplify (it works, but could be less convoluted).
for pos_col in 1:s
sep_line *= repeat(_rules[:row], maximum((ndigits(i) for i in M[:, pos_col])) + 1)
if mod(pos_col, sep_length) == 0
sep_line *= _rules[:row]
if pos_col == s
if pos_row == 1
sep_line *= _rules[:up_right_corner]
elseif pos_row == s
sep_line *= _rules[:bottom_right_corner]
else
sep_line *= _rules[:right_intersection]
end
elseif pos_row == 1
sep_line *= _rules[:up_intersection]
elseif pos_row == s
sep_line *= _rules[:bottom_intersection]
else
sep_line *= _rules[:middle_intersection]
end
end
end
return sep_line
end
@doc raw"""
```julia
display(io::IO, S::SudokuInstance)
display(S::SudokuInstance) # default to stdout
```
Displays an ``n\times n`` SudokuInstance.
"""
function Base.display(io::IO, S::SudokuInstance)
    sep_length = isqrt(size(S, 1))
println(io, _get_sep_line(size(S, 1), 1, S))
for (i, r) in enumerate(eachrow(S))
println(io, _format_line(r, S))
if iszero(mod(i, sep_length))
println(io, _get_sep_line(size(S, 1), i, S))
end
end
return nothing
end
"""
Base.display(S::SudokuInstance)
Extends `Base.display` to `SudokuInstance`.
"""
Base.display(S::SudokuInstance) = display(stdout, S)
"""
Base.display(X::Dictionary)
Extends `Base.display` to a sudoku configuration.
"""
Base.display(X::Dictionary) = display(SudokuInstance(X))
"""
Base.display(X, Val(:sudoku))
Extends `Base.display` to a sudoku configuration.
"""
Base.display(X, ::Val{:sudoku}) = display(SudokuInstance(X))
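Putting the rendering helpers together, a sketch of the expected usage (the exact box-drawing output depends on the `_rules` table and on the column widths computed above, so it is not reproduced here):

```julia
S = SudokuInstance([1 2 3 4; 3 4 1 2; 2 1 4 3; 4 3 0 1])
display(S)   # prints the grid with box-drawing separators; 0 entries show as '⋅'

# The same rendering is reachable from solver output or any matrix:
# display(solution(s))                  # Dictionary method
# display(value.(grid), Val(:sudoku))   # JuMP method, as in the sudoku docstring
```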
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 2491 |
@testset "JuMP: constraints" begin
m = Model(CBLS.Optimizer)
err = _ -> 1.0
concept = _ -> true
@variable(m, X[1:10], DiscreteSet(1:4))
@constraint(m, X in Error(err))
@constraint(m, X in Predicate(concept))
@constraint(m, X in AllDifferent())
@constraint(m, X in AllEqual())
@constraint(m, X in AllEqualParam(2))
@constraint(m, X in AlwaysTrue())
@constraint(m, X[1:4] in DistDifferent())
@constraint(m, X[1:2] in Eq())
@constraint(m, X in Ordered())
end
@testset "JuMP: sudoku 9x9" begin
m, X = sudoku(3)
optimize!(m)
solution_ = value.(X)
display(solution_, Val(:sudoku))
end
@testset "JuMP: golomb(5)" begin
m, X = golomb(5)
optimize!(m)
@info "JuMP: golomb(5)" value.(X)
end
@testset "JuMP: magic_square(3)" begin
m, X = magic_square(3)
optimize!(m)
@info "JuMP: magic_square(3)" value.(X)
end
@testset "JuMP: n_queens(5)" begin
m, X = n_queens(5)
optimize!(m)
@info "JuMP: n_queens(5)" value.(X)
end
@testset "JuMP: qap(12)" begin
m, X = qap(12, qap_weights, qap_distances)
optimize!(m)
@info "JuMP: qap(12)" value.(X)
end
@testset "JuMP: basic opt" begin
model = Model(CBLS.Optimizer)
set_optimizer_attribute(model, "iteration", 100)
@test get_optimizer_attribute(model, "iteration") == 100
set_time_limit_sec(model, 5.0)
@test time_limit_sec(model) == 5.0
@variable(model, x in DiscreteSet(0:20))
@variable(model, y in DiscreteSet(0:20))
@constraint(model, [x,y] in Predicate(v -> 6v[1] + 8v[2] >= 100 ))
@constraint(model, [x,y] in Predicate(v -> 7v[1] + 12v[2] >= 120 ))
objFunc = v -> 12v[1] + 20v[2]
@objective(model, Min, ScalarFunction(objFunc))
optimize!(model)
@info "JuMP: basic opt" value(x) value(y) (12*value(x)+20*value(y))
end
@testset "JuMP: Chemical equilibrium" begin
m, X = chemical_equilibrium(atoms_compounds, elements_weights, standard_free_energy)
# set_optimizer_attribute(m, "iteration", 10000)
# set_time_limit_sec(m, 120.0)
optimize!(m)
@info "JuMP: $compounds_names ⟺ $mixture_name" value.(X)
end
@testset "JuMP: Scheduling" begin
model, completion_times, start_times = scheduling(processing_times, due_times)
optimize!(model)
@info solution_summary(model)
@info "JuMP: scheduling" value.(start_times) value.(completion_times) processing_times due_times
@info (value.(start_times)+processing_times == value.(completion_times))
end
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 2289 |
const MOI = MathOptInterface
const MOIT = MOI.Test
const MOIU = MOI.Utilities
const MOIB = MOI.Bridges
const VOV = MOI.VectorOfVariables
const VI = MOI.VariableIndex
const OPTIMIZER_CONSTRUCTOR = MOI.OptimizerWithAttributes(
CBLS.Optimizer, MOI.Silent() => true
)
const OPTIMIZER = MOI.instantiate(OPTIMIZER_CONSTRUCTOR)
@testset "LocalSearchSolvers" begin
@test MOI.get(OPTIMIZER, MOI.SolverName()) == "LocalSearchSolvers"
end
# @testset "supports_default_copy_to" begin
# @test MOIU.supports_default_copy_to(OPTIMIZER, false)
# # Use `@test !...` if names are not supported
# @test !MOIU.supports_default_copy_to(OPTIMIZER, true)
# end
const BRIDGED = MOI.instantiate(
OPTIMIZER_CONSTRUCTOR, with_bridge_type = Float64
)
const CONFIG = MOIT.TestConfig(atol=1e-6, rtol=1e-6)
# @testset "Unit" begin
# # Test all the functions included in dictionary `MOI.Test.unittests`,
# # except functions "number_threads" and "solve_qcp_edge_cases."
# MOIT.unittest(
# BRIDGED,
# CONFIG,
# ["number_threads", "solve_qcp_edge_cases"]
# )
# end
# @testset "Modification" begin
# MOIT.modificationtest(BRIDGED, CONFIG)
# end
# @testset "Continuous Linear" begin
# MOIT.contlineartest(BRIDGED, CONFIG)
# end
# @testset "Continuous Conic" begin
# MOIT.contlineartest(BRIDGED, CONFIG)
# end
# @testset "Integer Conic" begin
# MOIT.intconictest(BRIDGED, CONFIG)
# end
@testset "MOI: examples" begin
# m = LocalSearchSolvers.Optimizer()
# MOI.add_variables(m, 3)
# MOI.add_constraint(m, VI(1), LS.DiscreteSet([1,2,3]))
# MOI.add_constraint(m, VI(2), LS.DiscreteSet([1,2,3]))
# MOI.add_constraint(m, VI(3), LS.DiscreteSet([1,2,3]))
# MOI.add_constraint(m, VOV([VI(1),VI(2)]), LS.MOIPredicate(allunique))
# MOI.add_constraint(m, VOV([VI(2),VI(3)]), LS.MOIAllDifferent(2))
# MOI.set(m, MOI.ObjectiveFunction{LS.ScalarFunction}(), LS.ScalarFunction(sum, VI(1)))
# MOI.optimize!(m)
m1 = CBLS.Optimizer()
MOI.add_variable(m1)
MOI.add_constraint(m1, VI(1), CBLS.DiscreteSet([1,2,3]))
m2 = CBLS.Optimizer()
MOI.add_constrained_variable(m2, CBLS.DiscreteSet([1,2,3]))
# opt = CBLS.sudoku(3, modeler = :MOI)
# MOI.optimize!(opt)
# @info solution(opt)
end
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 1799 |
sudoku_start = [
9 3 0 0 0 0 0 4 0
0 0 0 0 4 2 0 9 0
8 0 0 1 9 6 7 0 0
0 0 0 4 7 0 0 0 0
0 2 0 0 0 0 0 6 0
0 0 0 0 2 3 0 0 0
0 0 8 5 3 1 0 0 2
0 9 0 2 8 0 0 0 0
0 7 0 0 0 0 0 5 3
]
qap_weights = [
0 1 2 2 3 4 4 5 3 5 6 7
1 0 1 1 2 3 3 4 2 4 5 6
2 1 0 2 1 2 2 3 1 3 4 5
2 1 2 0 1 2 2 3 3 3 4 5
3 2 1 1 0 1 1 2 2 2 3 4
4 3 2 2 1 0 2 3 3 1 2 3
4 3 2 2 1 2 0 1 3 1 2 3
5 4 3 3 2 3 1 0 4 2 1 2
3 2 1 3 2 3 3 4 0 4 5 6
5 4 3 3 2 1 1 2 4 0 1 2
6 5 4 4 3 2 2 1 5 1 0 1
7 6 5 5 4 3 3 2 6 2 1 0
]
qap_distances = [
0 3 4 6 8 5 6 6 5 1 4 6
3 0 6 3 7 9 9 2 2 7 4 7
4 6 0 2 6 4 4 4 2 6 3 6
6 3 2 0 5 5 3 3 9 4 3 6
8 7 6 5 0 4 3 4 5 7 6 7
5 9 4 5 4 0 8 5 5 5 7 5
6 9 4 3 3 8 0 6 8 4 6 7
6 2 4 3 4 5 6 0 1 5 5 3
5 2 2 9 5 5 8 1 0 4 5 2
1 7 6 4 7 5 4 5 4 0 7 7
4 4 3 3 6 7 6 5 5 7 0 9
6 7 6 6 7 5 7 3 2 7 9 0
]
atoms_compounds = [
1 0 0
2 0 0
2 0 1
0 1 0
0 2 0
1 1 0
0 1 1
0 0 1
0 0 2
1 0 1
]
elements_weights = [2 1 1]
standard_free_energy = [
-6.0890
-17.164
-34.054
-5.9140
-24.721
-14.986
-24.100
-10.708
-26.662
-22.179
]
compounds_names = "x₁⋅H + x₂⋅H₂ + x₃⋅H₂O + x₄⋅N + x₅⋅N₂ + x₆⋅NH + x₇⋅NO + x₈⋅O + x₉⋅O₂ + x₁₀⋅OH"
mixture_name = "½⋅N₂H₄ + ½⋅O₂"
equation = compounds_names * " = " * mixture_name
processing_times = [3, 2, 1]
due_times = [5, 6, 8]
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 3145 |
@testset "Raw solver: internals" begin
models = [
sudoku(2; modeler = :raw),
]
for m in models
@info describe(m)
        s = solver(m; options = Options(print_level = :verbose, iteration = Inf))
for x in keys(get_variables(s))
@test get_name(s, x) == "x$x"
for c in get_cons_from_var(s, x)
@test x ∈ get_vars_from_cons(s, c)
end
@test constriction(s, x) == 3
@test draw(s, x) ∈ get_domain(s, x)
end
for c in keys(get_constraints(s))
@test length_cons(s, c) == 4
end
for x in keys(get_variables(s))
add_var_to_cons!(s, 3, x)
delete_var_from_cons!(s, 3, x)
@test length_var(s, x) == 4
end
for c in keys(get_constraints(s))
add_var_to_cons!(s, c, 17)
@test length_cons(s, c) == 5
@test 17 ∈ get_constraint(s, c)
delete_var_from_cons!(s, c, 17)
@test length_cons(s, c) == 4
end
solve!(s)
solution(s)
# TODO: temp patch for coverage, make it nice
for x in keys(LocalSearchSolvers.tabu_list(s))
LocalSearchSolvers.tabu_value(s, x)
end
# LocalSearchSolvers._values!(s, Dictionary{Int,Int}())
end
end
@testset "Raw solver: sudoku" begin
sudoku_instance = collect(Iterators.flatten(sudoku_start))
s = solver(
sudoku(3; start = sudoku_instance, modeler = :raw);
options = Options(print_level = :minimal, iteration = 10000)
)
display(Dictionary(1:length(sudoku_instance), sudoku_instance))
solve!(s)
display(solution(s))
end
@testset "Raw solver: golomb" begin
s = solver(golomb(5, modeler = :raw); options = Options(print_level = :minimal, iteration = 1000))
solve!(s)
@info "Results golomb!"
@info "Values: $(get_values(s))"
@info "Sol (val): $(best_value(s))"
@info "Sol (vals): $(!isnothing(best_value(s)) ? best_values(s) : nothing)"
end
@testset "Raw solver: mincut" begin
graph = zeros(5, 5)
graph[1,2] = 1.0
graph[1,3] = 2.0
graph[1,4] = 3.0
graph[2,5] = 1.0
graph[3,5] = 2.0
graph[4,5] = 3.0
s = solver(mincut(graph, source=1, sink=5), options = Options(print_level = :minimal))
solve!(s)
@info "Results mincut!"
@info "Values: $(get_values(s))"
@info "Sol (val): $(best_value(s))"
@info "Sol (vals): $(!isnothing(best_value(s)) ? best_values(s) : nothing)"
s = solver(mincut(graph, source=1, sink=5, interdiction=1), options = Options(print_level = :minimal))
solve!(s)
@info "Results 1-mincut!"
@info "Values: $(get_values(s))"
@info "Sol (val): $(best_value(s))"
@info "Sol (vals): $(!isnothing(best_value(s)) ? best_values(s) : nothing)"
s = solver(mincut(graph, source=1, sink=5, interdiction=2), options = Options(print_level = :minimal))
solve!(s)
@info "Results 2-mincut!"
@info "Values: $(get_values(s))"
@info "Sol (val): $(best_value(s))"
@info "Sol (vals): $(!isnothing(best_value(s)) ? best_values(s) : nothing)"
end
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | code | 275 |
using CBLS
using ConstraintModels
using Dictionaries
using JuMP
using LocalSearchSolvers
using MathOptInterface
using Test
@testset "ConstraintModels.jl" begin
include("instances.jl")
include("raw_solver.jl")
include("MOI_wrapper.jl")
include("JuMP.jl")
end
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | docs | 912 |
# ConstraintModels
[](https://JuliaConstraints.github.io/ConstraintModels.jl/stable)
[](https://JuliaConstraints.github.io/ConstraintModels.jl/dev)
[](https://github.com/JuliaConstraints/ConstraintModels.jl/actions)
[](https://codecov.io/gh/JuliaConstraints/ConstraintModels.jl)
[](https://github.com/invenia/BlueStyle)
[](https://github.com/SciML/ColPrac)
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.1.8 | 4a8a14f9a08181a1a0ccb1bafe7f0a233360a198 | docs | 224 |
```@meta
CurrentModule = ConstraintModels
```
# ConstraintModels
Documentation for [ConstraintModels](https://github.com/JuliaConstraints/ConstraintModels.jl).
```@index
```
```@autodocs
Modules = [ConstraintModels]
```
| ConstraintModels | https://github.com/JuliaConstraints/ConstraintModels.jl.git |
| ["MIT"] | 0.3.4 | 4c11fcf40b7cd7015b82e1ff7d5426fd7f145eb9 | code | 709 |
using Documenter
using Plots
push!(LOAD_PATH, "../src/")
using UnderwaterAcoustics
makedocs(
sitename = "UnderwaterAcoustics.jl",
format = Documenter.HTML(prettyurls = false),
linkcheck = !("skiplinks" in ARGS),
pages = Any[
"Home" => "index.md",
"Manual" => Any[
"uw_basic.md",
"pm_basic.md",
"pm_envref.md",
"pm_api.md"
],
"Propagation models" => Any[
"pm_pekeris.md"
],
"Tutorials" => Any[
"tut_turing.md",
"tut_autodiff.md"
]
]
)
deploydocs(
repo = "github.com/org-arl/UnderwaterAcoustics.jl.git",
branch = "gh-pages",
devbranch = "master",
devurl = "dev",
versions = ["stable" => "v^", "v#.#", "dev" => "dev"]
)
| UnderwaterAcoustics | https://github.com/org-arl/UnderwaterAcoustics.jl.git |