| licenses (sequence, lengths 1–3) | version (677 classes) | tree_hash (length 40) | path (1 class) | type (2 classes) | size (lengths 2–8) | text (lengths 25–67.1M) | package_name (lengths 2–41) | repo (lengths 33–86) |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.1.0 | 716c364edd604db0fbc497f9a8b93f8d700050af | docs | 152 | # API/Reference
## Index
```@index
```
```@docs
SimpleOrbit
KeplerianOrbit
relative_position
Orbits.separation
Orbits.position_angle
Orbits.flip
```
| Orbits | https://github.com/JuliaAstro/Orbits.jl.git |
|
[
"MIT"
] | 0.1.0 | 716c364edd604db0fbc497f9a8b93f8d700050af | docs | 2884 |
# Getting Started
## Keplerian Orbits
Let's dive straight into some of the features Orbits.jl offers. Keplerian orbits are the backbone of astrodynamics, and we provide a "kitchen-sink" style [`KeplerianOrbit`](@ref). This means it will try to parse whichever keyword arguments you provide, with units, uncertainties, and more, thanks to Julia's composability. Here we present the orbital solution for the binary system SAO 136799, as derived by Tokovinin et al. 2015[^1].
```@example kep
using Measurements
using Orbits
using Plots
using Unitful
using UnitfulAstro
using UnitfulRecipes
distance = inv(6.92e-3)u"pc"
orbit = KeplerianOrbit(;
period = (40.57 ± 0.19)u"yr",
ecc = 0.42 ± 0.009,
Omega = (318.6 ± 0.6)u"°",
tp = (1972.12 ± 0.16)u"yr",
incl = (54.7 ± 0.6)u"°",
a = (0.154 ± 0.001)u"arcsecond" * distance |> u"AU",
omega = (72.6 ± 0.8)u"°",
)
plot(orbit; label="")
scatter!([0], [0], c=:black, marker=:+, lab="SAO 136799A")
```
We can show the orbit in sky angles by providing the distance to the system.
```@example kep
plot(orbit; label="", distance)
scatter!([0], [0], c=:black, marker=:+, lab="SAO 136799A")
```
## Calculating ephemerides
Using our orbit from above, let's figure out the position of the secondary star on a specific date.
```@example kep
using Dates
function year_as_decimal(date::DateTime)
year_start = DateTime(Dates.year(date), 1, 1)
year_end = DateTime(Dates.year(date) + 1, 1, 1)
fraction = (date - year_start) / (year_end - year_start)
return (Dates.year(year_start) + fraction)u"yr"
end
obs_time = DateTime(2022, 2, 19, 10, 29, 45)
time = year_as_decimal(obs_time)
```
```@example kep
pos = relative_position(orbit, time)
# convert to angles for plot
ra, dec, _ = @. pos / distance |> u"arcsecond"
scatter!([ra], [dec], lab="SAO 136799B")
```
## Getting binary parameters
Continuing our example, let's calculate the position angle and separation of the binary at the observation date above.
```@example kep
using Orbits: position_angle, separation
pa = position_angle(orbit, time)
sep = separation(orbit, time) / distance |> u"arcsecond"
pa, sep
```
Let's show that with a polar plot. Keep in mind that the polar plot measures 0 degrees from the positive x-axis, while position angles are measured from the axis toward the north celestial pole, which corresponds to 90 degrees on the polar plot.
```@example kep
scatter([deg2rad(pa - 270)], [sep], proj=:polar, lab="SAO 136799B")
```
### SkyCoords.jl
These ephemerides can be translated into SkyCoords.jl coordinates easily:
```@example kep
using AstroAngles
using SkyCoords
origin = ICRSCoords(hms"09 22 50.8563427", dms"-09 50 19.659199")
```
```@example kep
using Measurements: value
coord = offset(origin, value(sep), deg2rad(value(pa)))
```
[^1]: Tokovinin et al. (2015) "Speckle Interferometry at SOAR in 2014" ([ads](https://ui.adsabs.harvard.edu/abs/2015AJ....150...50T))
| Orbits | https://github.com/JuliaAstro/Orbits.jl.git |
|
[
"MIT"
] | 0.1.0 | 716c364edd604db0fbc497f9a8b93f8d700050af | docs | 2097 | ```@meta
CurrentModule = Orbits
```
# Orbits.jl
[GitHub](https://github.com/juliaastro/Orbits.jl)
[CI](https://github.com/juliaastro/Orbits.jl/actions)
[PkgEval](https://juliaci.github.io/NanosoldierReports/pkgeval_badges/report.html)
[Coverage](https://codecov.io/gh/juliaastro/Orbits.jl)
[License: MIT](https://opensource.org/licenses/MIT)
[Docs (stable)](https://juliaastro.github.io/Orbits.jl/stable)
[Docs (dev)](https://juliaastro.github.io/Orbits.jl/dev)
Flexible and fast astronomical orbits (originally a submodule of [Transits.jl](https://github.com/JuliaAstro/Transits.jl)).
The goals of this package are, in this order:
* have a simple interface with high *composability*
* be flexible with respect to numeric types and application
* be fully compatible with the [ChainRules.jl](https://github.com/juliadiff/ChainRules.jl) automatic differentiation (AD) system to leverage the derived analytical gradients
* provide a codebase that is well-organized, instructive, and easy to extend
* maintain high performance: at least as fast as similar tools
## Installation
To install, use [Pkg](https://julialang.github.io/Pkg.jl/v1/managing-packages/). From the REPL, press `]` to enter Pkg mode:
```julia
pkg> add Orbits
```
If you want to use the most up-to-date version of the code, check it out from `main`
```julia
pkg> add Orbits#main
```
## Contributing and Support
If you would like to contribute, feel free to open a [pull request](https://github.com/JuliaAstro/Orbits.jl/pulls). If you want to discuss something before contributing, head over to [discussions](https://github.com/JuliaAstro/Orbits.jl/discussions) and join or open a new topic.
| Orbits | https://github.com/JuliaAstro/Orbits.jl.git |
|
[
"MIT"
] | 0.3.3 | 9dde360267e7f5e0c292bbe14b7d034f7e617031 | code | 10121 | module Readables
export Readable,
readable, readablestring,
decpoint, setdecpoint,
intsep, setintsep, intgroup, setintgroup,
fracsep, setfracsep, fracgroup, setfracgroup
const IMAG_UNIT_STR = ["𝛊"]
const DUAL_UNIT_STR = ["ε"]
const IntGroupSize = Ref(3)
const FracGroupSize = Ref(5)
const IntSepChar = Ref(',')
const FracSepChar = Ref('_')
const DecimalPoint = Ref('.')
mutable struct Readable
intgroupsize::Int
fracgroupsize::Int
intsepchar::Char
fracsepchar::Char
decimalpoint::Char
function Readable(intgroupsize::Int, fracgroupsize::Int, intsepchar::Char, fracsepchar::Char, decimalpoint::Char)
if !(0 < intgroupsize && 0 < fracgroupsize)
throw(ErrorException("groups must be > 0 ($intgroupsize, $fracgroupsize)"))
end
return new(intgroupsize, fracgroupsize, intsepchar, fracsepchar, decimalpoint)
end
end
Readable(;intgroup::Int=IntGroupSize[],
fracgroup::Int=FracGroupSize[],
intsep::Char=IntSepChar[],
fracsep::Char=FracSepChar[],
decimalpoint::Char=DecimalPoint[]
) =
Readable(intgroup, fracgroup, intsep, fracsep, decimalpoint)
const baseprefixes = Dict(2=>"0b", 8=>"0o", 10=>"", 16=>"0x")
function baseprefix(x::Int)
res = get(baseprefixes, x, nothing)
res === nothing && throw(ErrorException("base $x is not supported"))
return res
end
# ---- ---- ---- ----
const READABLE = Readable()
decpoint(x::Readable) = x.decimalpoint
intsep(x::Readable) = x.intsepchar
intgroup(x::Readable) = x.intgroupsize
fracsep(x::Readable) = x.fracsepchar
fracgroup(x::Readable) = x.fracgroupsize
function setdecpoint(x::Readable, decpoint::Char)
x.decimalpoint = decpoint
return x
end
function setintsep(x::Readable, intsep::Char)
x.intsepchar = intsep
return x
end
function setintgroup(x::Readable, intgroup::Int)
x.intgroupsize = intgroup
return x
end
function setfracsep(x::Readable, fracsep::Char)
x.fracsepchar = fracsep
return x
end
function setfracgroup(x::Readable, fracgroup::Int)
x.fracgroupsize = fracgroup
return x
end
# ---- ---- ---- ----
function readablestring(x::T; base::Int=10, sepwith::Union{Nothing,Char}=nothing, groupby::Union{Nothing,Int}=nothing) where {T}
if isnothing(sepwith) && isnothing(groupby)
return readablestring(READABLE, x, base)
end
# fill any unspecified setting from the defaults held in READABLE
r = Readable(intgroup = something(groupby, intgroup(READABLE)),
intsep = something(sepwith, intsep(READABLE)))
return readablestring(r, x, base)
end
readablestring(r::Readable, x::T, base::Int=10) where {T<:Signed} =
readable_int(r, x, base)
readablestring(x::T, base::Int=10) where {T<:Signed} =
readablestring(READABLE, x, base)
function readablestring(r::Readable, x::T, base::Int=10) where {T<:AbstractFloat}
str = string(x)
return readablestring(r, str, base)
end
readablestring(x::T, base::Int=10) where {T<:AbstractFloat} =
readablestring(READABLE, x, base)
function readable(io::IO, r::Readable, x::T, base::Int=10) where {T<:Signed}
str = readablestring(r, x, base)
print(io, str)
end
readable(io::IO, x::T, base::Int=10) where {T<:Signed} =
readable(io, READABLE, x, base)
readable(r::Readable, x::T, base::Int=10) where {T<:Signed} =
readable(Base.stdout, r, x, base)
readable(x::T, base::Int=10) where {T<:Signed} =
readable(Base.stdout, READABLE, x, base)
function readable(io::IO, r::Readable, x::T, base::Int=10) where {T<:AbstractFloat}
str = readablestring(r, x, base)
print(io, str)
end
readable(io::IO, x::T, base::Int=10) where {T<:AbstractFloat} =
readable(io, READABLE, x, base)
readable(r::Readable, x::T, base::Int=10) where {T<:AbstractFloat} =
readable(Base.stdout, r, x, base)
readable(x::T, base::Int=10) where {T<:AbstractFloat} =
readable(Base.stdout, READABLE, x, base)
function readablestring(r::Readable, x::T, base::Int=10) where {T<:Real}
str = string(x)
return readablestring(r, str, base)
end
readablestring(x::T, base::Int=10) where {T<:Real} =
readablestring(READABLE, x, base)
function readable(io::IO, r::Readable, x::T, base::Int=10) where {T<:Real}
str = readablestring(r, x, base)
print(io, str)
end
readable(io::IO, x::T, base::Int=10) where {T<:Real} =
readable(io, READABLE, x, base)
readable(r::Readable, x::T, base::Int=10) where {T<:Real} =
readable(Base.stdout, r, x, base)
readable(x::T, base::Int=10) where {T<:Real} =
readable(Base.stdout, READABLE, x, base)
function readablestring(r::Readable, x::T, base::Int=10) where {F, T<:Complex{F}}
re = real(x)
im = imag(x)
sgn = signbit(im) ? " - " : " + "
im = abs(im)
re_str = readablestring(r, string(re), base)
im_str = readablestring(r, string(im), base)
return string(re_str, sgn, im_str, IMAG_UNIT_STR[1])
end
readablestring(x::T, base::Int=10) where {F, T<:Complex{F}} =
readablestring(READABLE, x, base)
function readable(io::IO, r::Readable, x::T, base::Int=10) where {F, T<:Complex{F}}
str = readablestring(r, x, base)
print(io, str)
end
readable(io::IO, x::T, base::Int=10) where {F, T<:Complex{F}} =
readable(io, READABLE, x, base)
readable(r::Readable, x::T, base::Int=10) where {F, T<:Complex{F}} =
readable(Base.stdout, r, x, base)
readable(x::T, base::Int=10) where {F, T<:Complex{F}} =
readable(Base.stdout, READABLE, x, base)
function readablestring(r::Readable, x::T, base::Int=10) where {T<:Number}
if hasmethod(real, (T,))
re = real(x)
if hasmethod(imag, (T,))
im = imag(x)
if isa(im, Real)
readablestring(r, re, im, IMAG_UNIT_STR[1], base)
else
throw(DomainError("$T is not supported"))
end
elseif isdefined(@__MODULE__, :dual) && hasmethod(dual, (T,))
du = dual(x)
if isa(du, Real)
readablestring(r, re, du, DUAL_UNIT_STR[1], base)
else
throw(DomainError("$T is not supported"))
end
else
throw(DomainError("$T is not supported"))
end
else
throw(DomainError("$T is not supported"))
end
end
readablestring(x::T, base::Int=10) where {T<:Number} =
readablestring(READABLE, x, base)
function readablestring(r::Readable, x::T, y::T, unitstr::String, base::Int=10) where {T<:Real}
sgn = signbit(y) ? " - " : " + "
y = abs(y)
xstr = readablestring(r, x, base)
ystr = readablestring(r, y, base)
return string(xstr, sgn, ystr, unitstr)
end
readablestring(x::T, y::T, unitstr::String, base::Int=10) where {T<:Real} =
readablestring(READABLE, x, y, unitstr, base)
function readable(io::IO, r::Readable, x::T, base::Int=10) where {T<:Number}
str = readablestring(r, x, base)
print(io, str)
end
readable(io::IO, x::T, base::Int=10) where {T<:Number} =
readable(io, READABLE, x, base)
readable(r::Readable, x::T, base::Int=10) where {T<:Number} =
readable(Base.stdout, r, x, base)
readable(x::T, base::Int=10) where {T<:Number} =
readable(Base.stdout, READABLE, x, base)
splitstr(str::AbstractString, at::Union{String, Char}) = String.(split(str, at))
stripstr(str::AbstractString) = String(strip(str))
function readablestring(r::Readable, str::String, base::Int=10)
if !occursin(r.decimalpoint, str)
readable_int(r, BigInt(str), base)
else
ipart, fpart = splitstr(str, r.decimalpoint)
if occursin("e", fpart)
fpart, epart = splitstr(fpart, "e")
epart = (epart[1] !== '-' && epart[1] !== '+') ? string("e+", epart) : string("e", epart)
else
epart = ""
end
ripart = readable_int(r, BigInt(ipart), base)
rfpart = readable_frac(r, BigInt(fpart), base)
string(ripart, r.decimalpoint, rfpart, epart)
end
end
readablestring(x::String, base::Int=10) =
readablestring(READABLE, x, base)
function readable_int(r::Readable, x::I, base::Int=10) where {I<:Signed}
numsign = signbit(x) ? "-" : ""
str = string(abs(x), base=base)
ndigs = length(str)
ngroups, firstgroup = divrem(ndigs, r.intgroupsize)
ngroups == 0 && return str
idx = firstgroup
if idx > 0
res = string(str[1:idx], r.intsepchar)
else
res = ""
end
while ngroups > 1
res = string(res, str[idx+1:idx+r.intgroupsize], r.intsepchar)
idx += r.intgroupsize
ngroups -= 1
end
res = string(res, str[idx+1:idx+r.intgroupsize])
return string(numsign, baseprefix(base), res)
end
readable_int(x::I, base::Int=10) where {I<:Signed} = readable_int(READABLE, x, base)
function readable_frac(r::Readable, x::I, base::Int=10) where {I<:Signed}
signbit(x) && throw(ErrorException("negative fractional parts ($x) are not allowed"))
str = string(abs(x), base=base)
ndigs = length(str)
ngroups, lastgroup = divrem(ndigs, r.fracgroupsize)
ngroups == 0 && return str
idx = 0
res = ""
while ngroups > 1
res = string(res, str[idx+1:idx+r.fracgroupsize], r.fracsepchar)
idx += r.fracgroupsize
ngroups -= 1
end
if lastgroup == 0
res = string(res, str[idx+1:end])
else
res = string(res, str[idx+1:idx+r.fracgroupsize], r.fracsepchar, str[idx+r.fracgroupsize+1:end])
end
return res
end
readable_frac(x::I, base::Int=10) where {I<:Signed} = readable_frac(READABLE, x, base)
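As a concrete sketch of the default grouping (consistent with the package test suite), fractional digits are split into groups of five from the left and separated by `_`:

```julia
readable_frac(Readable(), big(12345678))  # "12345_678": one full group of 5, then the remainder
```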
function Base.BigInt(str::AbstractString)
s = stripstr(str)
nchars = length(s)
prec = ceil(Int, log2(10) * nchars) + 16
holdprec = precision(BigFloat)
setprecision(BigFloat, prec)
res = BigInt(BigFloat(s))
setprecision(BigFloat, holdprec)
return res
end
Base.BigInt(str::SubString) = BigInt(String(str))
end # Readables
| Readables | https://github.com/JeffreySarnoff/Readables.jl.git |
|
[
"MIT"
] | 0.3.3 | 9dde360267e7f5e0c292bbe14b7d034f7e617031 | code | 1460 | using Readables, Test
# using the default settings
setreadables!()
@test readablestring(0) == "0"
@test readablestring(12) == "12"
@test readablestring(123) == "123"
@test readablestring(1234) == "1,234"
@test readablestring(12345) == "12,345"
@test readablestring(123456) == "123,456"
@test readablestring(1234567) == "1,234,567"
@test readablestring(0.0) == "0.0"
@test readablestring(12.0) == "12.0"
@test readablestring(123.0) == "123.0"
@test readablestring(1234.0) == "1,234.0"
@test readablestring(12345.0) == "12,345.0"
@test readablestring(123456.0) == "123,456.0"
@test readablestring(1234567.0) == "1.23456_7e+6"
@test readablestring(0.12345) == "0.12345"
@test readablestring(12.12345) == "12.12345"
@test readablestring(123.12345) == "123.12345"
@test readablestring(1234.12345) == "1,234.12345"
@test readablestring(12345.12345) == "12,345.12345"
@test readablestring(123456.12345) == "123,456.12345"
@test readablestring(1234567.12345) == "1.23456_71234_5e+6"
@test readablestring(0.12345678) == "0.12345_678"
@test readablestring(12.12345678) == "12.12345_678"
@test readablestring(123.12345678) == "123.12345_678"
@test readablestring(1234.12345678) == "1,234.12345_678"
@test readablestring(12345.12345678) == "12,345.12345_678"
@test readablestring(123456.12345678) == "123,456.12345_678"
@test readablestring(1234567.12325675) == "1.23456_71232_5675e+6"
@test readablestring(BigFloat("1234567.12325675")) == "1.23456_71232_5675e+06"
| Readables | https://github.com/JeffreySarnoff/Readables.jl.git |
|
[
"MIT"
] | 0.3.3 | 9dde360267e7f5e0c292bbe14b7d034f7e617031 | docs | 1789 | # Readables.jl [ do not use #master, pending revision ]
### Make extended precision numbers readable.
| Copyright © 2018 by Jeffrey Sarnoff. | This work is made available under The MIT License. |
|:--------------------------------------|:------------------------------------------------:|
-----
[Build Status](https://travis-ci.org/JeffreySarnoff/Readables.jl)
----
## Use
```julia
using Readables
setprecision(BigFloat, 160)
macro twoways(val)
:(println(string("\n\t", $val, "\n\t", readablestring($val))))
end
```
```julia
val = (pi/2)^9; @twoways(val)
58.22089713563711
58.22089_71356_3711
val = (BigFloat(pi)/2)^9; @twoways(val)
58.220897135637132161151176564921201882554800340637
58.22089_71356_37132_16115_11765_64921_20188_25548_00340_637
setprecision(BigFloat, 192)
val = (BigFloat(pi))^115; ival = trunc(BigInt, val); @twoways(ival)
1486741142588149449007460570055579083524909316281177999404
1,486,741,142,588,149,449,007,460,570,055,579,083,524,909,316,281,177,999,404
```
## Customize
```julia
config = Readable()
config = setintgroup(config, 6)
config = setintsep(config, '⚬')
ival = trunc(BigInt, (BigFloat(pi))^64);
readablestring(config, ival)
"65⚬704006⚬445717⚬084572⚬022626⚬334540"
```
## Configure
We assume a `Real` value has an integer component and a fractional component (either may be zero).
`intgroup`, `fracgroup`: the number of digits used to form digit subsequences in the integer and fractional parts.
`intsep`, `fracsep`: the `Char` used to separate groups in the integer and fractional parts.
### exported configurables
- decpoint, setdecpoint
- intsep, fracsep, setintsep, setfracsep
- intgroup, fracgroup, setintgroup, setfracgroup
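The fractional part can be regrouped the same way. A small sketch using the setters above (group size and separator chosen arbitrarily):

```julia
config = setfracsep(setfracgroup(Readable(), 3), '·')
readablestring(config, "0.12345678")
"0.123·456·78"
```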
----
| Readables | https://github.com/JeffreySarnoff/Readables.jl.git |
|
[
"MIT"
] | 0.5.0 | a8c5cb40f6eb86a4cc506329aebd245f186202df | code | 302 | push!(LOAD_PATH, "../src/")
using Documenter, JefimenkoModels
makedocs(sitename="JefimenkoModels.jl",
authors = "Michael Ingold",
pages = [
"Home" => "index.md",
"Tutorial" => "tutorial.md",
"Reference" => "reference.md"
]
)
deploydocs(repo="github.com/mikeingold/JefimenkoModels.jl")
| JefimenkoModels | https://github.com/mikeingold/JefimenkoModels.jl.git |
|
[
"MIT"
] | 0.5.0 | a8c5cb40f6eb86a4cc506329aebd245f186202df | code | 3493 | ################################################################################
# BUILD THE MODEL
################################################################################
using JefimenkoModels
using Unitful, UnitfulCoordinateSystems
using Unitful.DefaultSymbols: V, A, m, ns, s
model_disk = let
# Surface disk source with radius 0.5m
# Electric current only: spatially-uniform, x-directed, driven by a transient pulse
ρ₀ = 0.5m
(t₀_s, f₀_Hz, β₀) = (5.0e-9, 500e6, 1.25)
sig(t_s::Real) = sin(2π*f₀_Hz*t_s) * exp(-β₀*(f₀_Hz*t_s)^2)
Je(r̄::AbstractCoordinate, t_s::Real) = x̂ .* sig(t_s-t₀_s) # t in s -> Jₑ in A
source = SurfaceSource_Disk{Float64}(ρ₀, NULL_CHARGE, NULL_CHARGE, Je, NULL_CURRENT)
metadata = Dict(:description=>"Uniform current over a 0.5m disk, stimulated by transient pulse signal.")
JefimenkoModel{Float64}(CLASSICAL_VACUUM, [source], metadata)
end
################################################################################
# RUN THE MODEL
################################################################################
# Observation location and time domain of interest
r = CoordinateCartesian(0.0m, 0.0m, 1.5m)
t = range(0.0ns, 20.0ns, length=800)
# Calculate the fields at r over the time domain
efield = map(t -> E(r,t,model_disk), t)
hfield = map(t -> H(r,t,model_disk), t)
################################################################################
# PLOT THE DATA
################################################################################
using Plots
# Accessor functions
e(i) = map(e -> getindex(e,i), efield)
h(i) = map(h -> getindex(h,i), hfield)
common_formatting = Dict(
# Major grid
:gridwidth => 1,
:gridalpha => 0.2,
:gridstyle => :dash,
# Minor grid
:minorgrid => true,
:minorgridalpha => 0.15,
:minorgridwidth => 1,
:minorgridstyle => :dot,
)
jx(t::Unitful.Time) = model_disk.sources[1].Jₑ(r, ustrip(s,t))[1]
jy(t::Unitful.Time) = model_disk.sources[1].Jₑ(r, ustrip(s,t))[2]
jz(t::Unitful.Time) = model_disk.sources[1].Jₑ(r, ustrip(s,t))[3]
# Plot the source current density
p1 = plot(t, [jx.(t), jy.(t), jz.(t)], label=["Jx" "Jy" "Jz"], linewidth=3,
title="Source Current Density (Spatially-Uniform)",
xlims=(0,20), xticks=0:4:20, xminorticks=4,
ylims=(-1,1), yticks=-1:0.25:1, yminorticks=2,
; common_formatting...)
plot!(p1, xlabel="Time [ns]", ylabel="Magnitude [A/m]")
savefig(p1, joinpath(@__DIR__,"disk_fig_source.png"))
# Plot the electric field at the observation point
p2 = plot(t, [e(1), e(2), e(3)], label=["Ex" "Ey" "Ez"], linewidth=3,
title="Electric Field (z = 1.5m)", framestyle=:zerolines,
xlims=(0,20), xticks=0:4:20, xminorticks=4,
ylims=(-200,150), yticks=-200:50:150, yminorticks=2,
; common_formatting...)
plot!(p2, xlabel="Time [ns]", ylabel="Magnitude [V/m]")
savefig(p2, joinpath(@__DIR__,"disk_fig_efield.png"))
# Plot the magnetic field at the observation point
p3 = plot(t, [h(1), h(2), h(3)], label=["Hx" "Hy" "Hz"], linewidth=3,
title="Magnetic Field (z = 1.5m)", framestyle=:zerolines,
xlims=(0,20), xticks=0:4:20, xminorticks=4,
ylims=(-0.5,0.5), yticks=-0.5:0.1:0.5, yminorticks=2,
; common_formatting...)
plot!(p3, xlabel="Time [ns]", ylabel="Magnitude [A/m]")
savefig(p3, joinpath(@__DIR__,"disk_fig_hfield.png"))
| JefimenkoModels | https://github.com/mikeingold/JefimenkoModels.jl.git |
|
[
"MIT"
] | 0.5.0 | a8c5cb40f6eb86a4cc506329aebd245f186202df | code | 7308 | module JefimenkoModels
using Unitful, UnitfulCoordinateSystems
using Unitful.DefaultSymbols: W, A, V, C, m, s, rad
using PhysicalConstants.CODATA2018: c_0, ε_0, μ_0
using ForwardDiff, Integrals, LinearAlgebra, StaticArrays
__DEFAULT_RTOL = sqrt(eps())
###########################################################################
# DATA STRUCTURES & COMMON DEFAULTS
###########################################################################
# Data structures
include("structs.jl")
include("accessors.jl")
CLASSICAL_VACUUM = let
ε₀ = uconvert((A*s)/(V*m), float(ε_0))
μ₀ = uconvert((V*s)/(A*m), float(μ_0))
c₀ = uconvert(m/s, float(c_0))
PropagationMedia_Simple(ε₀, μ₀, c₀)
end
NULL_CHARGE(r̄::AbstractCoordinate, t_s::Real) = 0
NULL_CURRENT(r̄::AbstractCoordinate, t_s::Real) = StaticArrays.SVector(0, 0, 0)
export CLASSICAL_VACUUM, NULL_CHARGE, NULL_CURRENT
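These null functions stand in for source terms a model does not use. A sketch mirroring the tutorial (the sinusoidal `Je` is illustrative, not part of the package):

```julia
using Unitful, UnitfulCoordinateSystems
# Illustrative x-directed electric current density; magnetic sources left null
Je(r̄::AbstractCoordinate, t_s::Real) = x̂ .* sin(2π * 500e6 * t_s)
source = SurfaceSource_Disk{Float64}(0.5u"m", NULL_CHARGE, NULL_CHARGE, Je, NULL_CURRENT)
```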
###########################################################################
# RETARDED-TIME CALCULATIONS
###########################################################################
"""
t′(r̄::AbstractCoordinate, t::Time, r̄′::AbstractCoordinate, c::Quantity)
Calculate the retarded-time at a source point `r̄′` for an observer at the space-time
point (`r̄`,`t`) through a medium with speed of light `c`.
# Arguments
- `r̄::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at the observation point
- `r̄′::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the source point
- `c::Quantity`: Unitful speed of light in the medium between r̄′ and r̄
"""
function t′(r̄::AbstractCoordinate, t::Unitful.Time, r̄′::AbstractCoordinate, c::Quantity)::Unitful.Time
return (t - (norm(r̄-r̄′)/c))
end
"""
t′(r̄::AbstractCoordinate, t::Time, r̄′::AbstractCoordinate, media::PropagationMedia)
Calculate the retarded-time at a source point `r̄′` for an observer at the space-time
point (`r̄`,`t`) through a `propagation medium`.
# Arguments
- `r̄::UnitfulCoordinateSystems.Coordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at the observation point
- `r̄′::UnitfulCoordinateSystems.Coordinate`: spatial location of the source point
- `media::PropagationMedia`: properties of the medium between r̄′ and r̄
"""
function t′(r̄::AbstractCoordinate, t::Unitful.Time, r̄′::AbstractCoordinate, media::PropagationMedia_Simple)::Unitful.Time
return t′(r̄, t, r̄′, media.c)
end
function t′(r̄::AbstractCoordinate, t::Unitful.Time, r̄′::AbstractCoordinate, media::PropagationMedia_DiagonallyAnisotropic)::Unitful.Time
Δr̄ = SVector(r̄ - r̄′)
Δt = norm(media.c^-1 * Δr̄) |> unit(t)
return (t - Δt)
end
export t′
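A usage sketch (geometry mirrors the tutorial: an observer 1.5 m above a source point at the origin, in classical vacuum):

```julia
using Unitful, UnitfulCoordinateSystems
r̄  = CoordinateCartesian(0.0u"m", 0.0u"m", 1.5u"m")   # observation point
r̄′ = CoordinateCartesian(0.0u"m", 0.0u"m", 0.0u"m")   # source point
t′(r̄, 10.0u"ns", r̄′, CLASSICAL_VACUUM)   # ≈ 4.997 ns: retarded by 1.5 m / c₀
```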
###########################################################################
# EM FIELD CALCULATIONS
###########################################################################
"""
E(r̄::AbstractCoordinate, t::Time, model::JefimenkoModel; rtol=sqrt(eps))
Calculate the predicted electric field 𝐄 observed at space-time point (`r̄`,`t`) using
the electric Jefimenko equation for a particular `model`. Calculate the integral using
a specified `relative tolerance`.
# Arguments
- `r̄::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at which the electric field is observed
- `model::JefimenkoModel`: model of the transmitting source and propagation media
# Keywords
- `rtol::Real`: relative tolerance at which to solve the integral (optional)
"""
function E(r̄::AbstractCoordinate, t::Unitful.Time, model::JefimenkoModel; rtol=__DEFAULT_RTOL)
# Superimpose the contributions of the E(r̄,t) produced by each source in model
E_contrib(source) = __E(r̄, t, source, model.media; rtol=rtol)
return mapreduce(E_contrib, +, model.sources)
end
"""
H(r̄::AbstractCoordinate, t::Time, model::JefimenkoModel; rtol=sqrt(eps))
Calculate the predicted magnetic field 𝐇 observed at space-time point (`r̄`,`t`) using
the magnetic Jefimenko equation for a particular `model`. Calculate the integral using
a specified `relative tolerance`.
# Arguments
- `r̄::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at which the field is observed
- `model::JefimenkoModel`: model of the transmitting source and propagation media
# Keywords
- `rtol::Real`: relative tolerance at which to solve the integral (optional)
"""
function H(r̄::AbstractCoordinate, t::Unitful.Time, model::JefimenkoModel; rtol=__DEFAULT_RTOL)
# Superimpose the contributions of the 𝐇(r̄,t) produced by each source in model
H_contrib(source) = __H(r̄, t, source, model.media; rtol=rtol)
return mapreduce(H_contrib, +, model.sources)
end
"""
P(r̄::AbstractCoordinate, t::Time, model::JefimenkoModel; rtol=sqrt(eps))
Calculate the predicted Poynting vector 𝐏 observed at space-time point (`r̄`,`t`) using
the electric and magnetic Jefimenko equations for a particular `model`. Calculate the
integrals using a specified `relative tolerance`.
# Arguments
- `r̄::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at which the field is observed
- `model::JefimenkoModel`: model of the transmitting source and propagation media
# Keywords
- `rtol::Real`: relative tolerance at which to solve the integral (optional)
"""
function P(r̄::AbstractCoordinate, t::Unitful.Time, model::JefimenkoModel; rtol=__DEFAULT_RTOL)
Ert = E(r̄,t,model; rtol=rtol)
Hrt = H(r̄,t,model; rtol=rtol)
return cross(Ert,Hrt) .|> W/m^2
end
"""
__P(r̄::AbstractCoordinate, t::Time, source::JefimenkoSource, media::PropagationMedia; rtol)
Calculate the predicted Poynting vector 𝐏 observed at space-time point (`r̄`,`t`) due to
a particular `source`, transmitted through a particular `propagation media`. Calculate
the integral using a specified `relative tolerance`.
# Arguments
- `r̄::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at which the electric field is observed
- `source::JefimenkoSource`: source of the electric field
- `media::PropagationMedia`: properties of the propagation media
# Keywords
- `rtol::Real`: relative tolerance at which to solve the integral (optional)
"""
function __P(r̄::AbstractCoordinate, t::Unitful.Time, source::AbstractJefimenkoSource{T},
media::AbstractPropagationMedia; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
Ert = __E(r̄,t,source,media; rtol=rtol)
Hrt = __H(r̄,t,source,media; rtol=rtol)
return cross(Ert,Hrt) .|> W/m^2
end
include("integrands.jl")
include("fields_E.jl")
include("fields_H.jl")
export E, H, P
end
| JefimenkoModels | https://github.com/mikeingold/JefimenkoModels.jl.git |
|
[
"MIT"
] | 0.5.0 | a8c5cb40f6eb86a4cc506329aebd245f186202df | code | 4337 | ################################################################################
# ACCESSOR FUNCTIONS
################################################################################
function Base.getproperty(s::VolumeSource_Rectangular, sym::Symbol)
if sym in (:xlims, :ylims, :zlims) # included
return getfield(s, sym)
elseif sym in (:rho_e, :rho_h, :J_e, :J_h) # included
return getfield(s, sym)
elseif sym in (:ρₑ, :ρe) # aliases
return getfield(s, :rho_e)
elseif sym in (:ρₕ, :ρh) # aliases
return getfield(s, :rho_h)
elseif sym in (:Jₑ, :Je) # aliases
return getfield(s, :J_e)
elseif sym in (:Jₕ, :Jh) # aliases
return getfield(s, :J_h)
else # fallback
return getfield(s, sym)
end
end
function Base.getproperty(s::VolumeSource_Cylinder, sym::Symbol)
if sym in (:r, :philims, :zlims) # included
return getfield(s, sym)
elseif sym in (:rho_e, :rho_h, :J_e, :J_h) # included
return getfield(s, sym)
elseif sym in (:ρₑ, :ρe) # aliases
return getfield(s, :rho_e)
elseif sym in (:ρₕ, :ρh) # aliases
return getfield(s, :rho_h)
elseif sym in (:Jₑ, :Je) # aliases
return getfield(s, :J_e)
elseif sym in (:Jₕ, :Jh) # aliases
return getfield(s, :J_h)
else # fallback
return getfield(s, sym)
end
end
function Base.getproperty(s::VolumeSource_Sphere, sym::Symbol)
if sym in (:r, :thetalims, :philims) # included
return getfield(s, sym)
elseif sym in (:rho_e, :rho_h, :J_e, :J_h) # included
return getfield(s, sym)
elseif sym in (:ρₑ, :ρe) # aliases
return getfield(s, :rho_e)
elseif sym in (:ρₕ, :ρh) # aliases
return getfield(s, :rho_h)
elseif sym in (:Jₑ, :Je) # aliases
return getfield(s, :J_e)
elseif sym in (:Jₕ, :Jh) # aliases
return getfield(s, :J_h)
else # fallback
return getfield(s, sym)
end
end
function Base.getproperty(s::SurfaceSource_Rectangle, sym::Symbol)
if sym in (:xlims, :ylims) # included
return getfield(s, sym)
elseif sym in (:rho_e, :rho_h, :J_e, :J_h) # included
return getfield(s, sym)
elseif sym in (:ρₑ, :ρe) # aliases
return getfield(s, :rho_e)
elseif sym in (:ρₕ, :ρh) # aliases
return getfield(s, :rho_h)
elseif sym in (:Jₑ, :Je) # aliases
return getfield(s, :J_e)
elseif sym in (:Jₕ, :Jh) # aliases
return getfield(s, :J_h)
else # fallback
return getfield(s, sym)
end
end
function Base.getproperty(s::SurfaceSource_Disk, sym::Symbol)
if sym in (:r,) # included
return getfield(s, sym)
elseif sym in (:rho_e, :rho_h, :J_e, :J_h) # included
return getfield(s, sym)
elseif sym in (:ρ₀, :ρ_0, :rho_0) # aliases
return getfield(s, :r)
elseif sym in (:ρₑ, :ρe) # aliases
return getfield(s, :rho_e)
elseif sym in (:ρₕ, :ρh) # aliases
return getfield(s, :rho_h)
elseif sym in (:Jₑ, :Je) # aliases
return getfield(s, :J_e)
elseif sym in (:Jₕ, :Jh) # aliases
return getfield(s, :J_h)
else # fallback
return getfield(s, sym)
end
end
function Base.getproperty(s::LineSource_Straight, sym::Symbol)
if sym in (:a, :b) # included
return getfield(s, sym)
elseif sym in (:rho_e, :rho_h, :J_e, :J_h) # included
return getfield(s, sym)
elseif sym in (:ā,) # aliases
return getfield(s, :a)
elseif sym in (:b̄,) # aliases
return getfield(s, :b)
elseif sym in (:ρₑ, :ρe) # aliases
return getfield(s, :rho_e)
elseif sym in (:ρₕ, :ρh) # aliases
return getfield(s, :rho_h)
elseif sym in (:Jₑ, :Je) # aliases
return getfield(s, :J_e)
elseif sym in (:Jₕ, :Jh) # aliases
return getfield(s, :J_h)
else # fallback
return getfield(s, sym)
end
end
function Base.getproperty(m::AbstractPropagationMedia, sym::Symbol)
if sym in (:epsilon, :mu, :c) # included
return getfield(m, sym)
elseif sym in (:ε, :ϵ) # aliases
return getfield(m, :epsilon)
elseif sym in (:μ,) # aliases
return getfield(m, :mu)
else # fallback
return getfield(m, sym)
end
end
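A sketch of the alias behavior above, using only the exported null placeholders as source terms:

```julia
using JefimenkoModels, Unitful
src = SurfaceSource_Disk{Float64}(0.5u"m", NULL_CHARGE, NULL_CHARGE, NULL_CURRENT, NULL_CURRENT)
src.Jₑ === src.J_e   # Unicode alias resolves to the same underlying field
src.ρ₀ == src.r      # the disk radius is reachable under either name
```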
| JefimenkoModels | https://github.com/mikeingold/JefimenkoModels.jl.git |
###########################################################################
# LINEAR SOURCES
###########################################################################
"""
__E(r̄::AbstractCoordinate, t::Time, source::AbstractJefimenkoSource,
media::PropagationMedia; rtol=sqrt(eps))
Calculate the electric field at (`r̄`,`t`) using the electric Jefimenko equation due to a
particular `source`, transmitted through a particular homogeneous `propagation media`.
Calculate the integral using a specified `relative tolerance`.
# Arguments
- `r̄::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at which the electric field is observed
- `source::AbstractJefimenkoSource{T}`: source of the electric field
- `media::PropagationMedia`: properties of the propagation media
# Keywords
- `rtol::Real`: relative tolerance at which to solve the integral (optional)
"""
function __E(r̄::AbstractCoordinate, t::Unitful.Time, source::LineSource_Straight{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
# Calculate the length of the line source from starting point ā to ending point b̄
dmax::Unitful.Length = norm(source.b̄ - source.ā)
# Calculate the integrand E-field vector in implied units [V/m²]
function integrand_Vm2(u, p)::SVector{3,T}
d::Unitful.Length = u * m
# Parameterize a straight line from ā to b̄ according to the distance traveled
# Start at ā, progress the specified distance in direction û
û = (source.b̄ - source.ā) ./ dmax
r̄′::CoordinateCartesian = source.ā + (d .* û)
return __integrand_E(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t)::SVector{3,T}
end
    # Define the integrand as a function of the distance d traveled along the line source, then solve it
prob = IntegralProblem(integrand_Vm2, zero(T), ustrip(T,m,dmax))
sol = solve(prob, QuadGKJL(), reltol=rtol)
return ( (1/4π) .* (sol.u) .* (V/m) ) # in [V/m² * m] -> [V/m]
end
###########################################################################
# SURFACE SOURCES
###########################################################################
function __E(r̄::AbstractCoordinate, t::Unitful.Time, source::SurfaceSource_Disk{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
function disk_integrand_Vm2(u, p)::SVector{3,T}
# Convert given (ρ[m],φ[rad]) to a Coordinate
r̄′ = CoordinateCartesian(CoordinatePolar(u[1]*m, u[2]*rad))
# Return integrand scaled by the radial integration factor
return (__integrand_E(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t) .* u[1])::SVector{3,T}
end
# Get integration limits: ρ ∈ [0,ρ₀], ϕ ∈ [0,2π]
ρ₀_m = ustrip(T, m, source.ρ₀)
lb = [zero(T), zero(T)]
ub = [T(ρ₀_m), T(2π)]
# Define and solve the integral problem over a circular aperture
prob = IntegralProblem(disk_integrand_Vm2, lb, ub)
sol = solve(prob, HCubatureJL(), reltol=rtol) # implied units [V/m² * m] -> [V/m]
return ( (1/4π) .* (sol.u) .* (V/m) )
end
function __E(r̄::AbstractCoordinate, t::Unitful.Time, source::SurfaceSource_Rectangle{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
function integrand_Vm3(u, p)::SVector{3,T}
r̄′ = CoordinateCartesian(u[1]*m, u[2]*m, 0.0m)
return __integrand_E(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t)::SVector{3,T}
end
# Get integration limits
(lim_min_x_m, lim_max_x_m) = ustrip.(T, m, source.xlims)
(lim_min_y_m, lim_max_y_m) = ustrip.(T, m, source.ylims)
lb = [lim_min_x_m, lim_min_y_m]
ub = [lim_max_x_m, lim_max_y_m]
# Define and solve the integral problem over rectangular aperture
prob = IntegralProblem(integrand_Vm3, lb, ub)
sol = solve(prob, HCubatureJL(), reltol=rtol) # implied units [V/m³ * m²] -> [V/m]
return ( (1/4π) .* (sol.u) .* (V/m) )
end
###########################################################################
# VOLUME SOURCES
###########################################################################
function __E(r̄::AbstractCoordinate, t::Unitful.Time, source::VolumeSource_Cylinder{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
error("Solver not yet implemented.")
end
function __E(r̄::AbstractCoordinate, t::Unitful.Time, source::VolumeSource_Rectangular{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
function integrand_Vm4(u, p)::SVector{3,T}
r̄′ = CoordinateCartesian(u[1]*m, u[2]*m, u[3]*m)
return __integrand_E(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t)::SVector{3,T}
end
# Get integration limits
(lim_min_x_m, lim_max_x_m) = ustrip.(T, m, source.xlims)
(lim_min_y_m, lim_max_y_m) = ustrip.(T, m, source.ylims)
(lim_min_z_m, lim_max_z_m) = ustrip.(T, m, source.zlims)
lb = [lim_min_x_m, lim_min_y_m, lim_min_z_m]
ub = [lim_max_x_m, lim_max_y_m, lim_max_z_m]
# Define and solve the integral problem over rectangular aperture
prob = IntegralProblem(integrand_Vm4, lb, ub)
sol = solve(prob, HCubatureJL(), reltol=rtol) # implied units [V/m⁴ * m³] -> [V/m]
return ( (1/4π) .* (sol.u) .* (V/m) )
end
function __E(r̄::AbstractCoordinate, t::Unitful.Time, source::VolumeSource_Sphere{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
error("Solver not yet implemented.")
end
###########################################################################
# LINEAR SOURCES
###########################################################################
"""
__H(r̄::AbstractCoordinate, t::Time, source::JefimenkoSource, media::PropagationMedia; rtol)
Calculate the magnetic field at (`r̄`,`t`) using the magnetic Jefimenko equation due to a
particular `source`, transmitted through a particular homogeneous `propagation media`.
Calculate the integral using a specified `relative tolerance`.
# Arguments
- `r̄::UnitfulCoordinateSystems.AbstractCoordinate`: spatial location of the observation point
- `t::Unitful.Time`: time at which the magnetic field is observed
- `source::JefimenkoSource`: source of the magnetic field
- `media::PropagationMedia`: properties of the propagation media
# Keywords
- `rtol::Real`: relative tolerance at which to solve the integral (optional)
"""
function __H(r̄::AbstractCoordinate, t::Unitful.Time, source::LineSource_Straight{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
# Calculate the length of the line source from starting point ā to ending point b̄
dmax::Unitful.Length = norm(source.b̄ - source.ā)
# Calculate the integrand H-field vector in implied units [A/m²]
function integrand_Am2(u, p)::SVector{3,T}
d::Unitful.Length = u * m
# Parameterize a straight line from ā to b̄ according to the distance traveled
# Start at ā, progress the specified distance in direction û
û = (source.b̄ - source.ā) ./ dmax
r̄′::CoordinateCartesian = source.ā + (d .* û)
return __integrand_H(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t)::SVector{3,T}
end
    # Define the integrand as a function of the distance d traveled along the line source, then solve it
prob = IntegralProblem(integrand_Am2, zero(T), ustrip(T,m,dmax))
sol = solve(prob, QuadGKJL(), reltol=rtol) # implied units [A/m² * m] -> [A/m]
return ( (1/4π) .* (sol.u) .* (A/m) )
end
###########################################################################
# SURFACE SOURCES
###########################################################################
function __H(r̄::AbstractCoordinate, t::Unitful.Time, source::SurfaceSource_Disk{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
function disk_integrand_Am2(u, p)::SVector{3,T}
# Convert given (ρ[m],φ[rad]) to a Coordinate
r̄′ = CoordinateCartesian(CoordinatePolar(u[1]*m, u[2]*rad))
# Return integrand scaled by the radial integration factor,
return (__integrand_H(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t) .* u[1])::SVector{3,T}
end
# Get integration limits: ρ ∈ [0,ρ₀], ϕ ∈ [0,2π]
ρ₀_m = ustrip(T, m, source.ρ₀)
lb = [zero(T), zero(T)]
ub = [T(ρ₀_m), T(2π)]
# Define and solve the integral problem over a circular aperture
prob = IntegralProblem(disk_integrand_Am2, lb, ub)
sol = solve(prob, HCubatureJL(), reltol=rtol) # implied units [A/m² * m] -> [A/m]
return ( (1/4π) .* (sol.u) .* (A/m) )
end
function __H(r̄::AbstractCoordinate, t::Unitful.Time, source::SurfaceSource_Rectangle{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
function integrand_Am3(u, p)::SVector{3,T}
r̄′ = CoordinateCartesian(u[1]*m, u[2]*m, 0.0m)
return __integrand_H(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t)::SVector{3,T}
end
# Get integration limits
(lim_min_x_m, lim_max_x_m) = ustrip.(T, m, source.xlims)
(lim_min_y_m, lim_max_y_m) = ustrip.(T, m, source.ylims)
lb = [lim_min_x_m, lim_min_y_m]
ub = [lim_max_x_m, lim_max_y_m]
# Define and solve the integral problem over rectangular aperture
prob = IntegralProblem(integrand_Am3, lb, ub)
sol = solve(prob, HCubatureJL(), reltol=rtol) # implied units [A/m³ * m²] -> [A/m]
return ( (1/4π) .* (sol.u) .* (A/m) )
end
###########################################################################
# VOLUME SOURCES
###########################################################################
function __H(r̄::AbstractCoordinate, t::Unitful.Time, source::VolumeSource_Cylinder{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
error("Solver not yet implemented.")
end
function __H(r̄::AbstractCoordinate, t::Unitful.Time, source::VolumeSource_Rectangular{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
function integrand_Am4(u, p)::SVector{3,T}
r̄′ = CoordinateCartesian(u[1]*m, u[2]*m, u[3]*m)
return __integrand_H(r̄′; source=source, media=media,
r̄=CoordinateCartesian(r̄), t=t)::SVector{3,T}
end
# Get integration limits
(lim_min_x_m, lim_max_x_m) = ustrip.(T, m, source.xlims)
(lim_min_y_m, lim_max_y_m) = ustrip.(T, m, source.ylims)
(lim_min_z_m, lim_max_z_m) = ustrip.(T, m, source.zlims)
lb = [lim_min_x_m, lim_min_y_m, lim_min_z_m]
ub = [lim_max_x_m, lim_max_y_m, lim_max_z_m]
# Define and solve the integral problem over rectangular aperture
prob = IntegralProblem(integrand_Am4, lb, ub)
sol = solve(prob, HCubatureJL(), reltol=rtol) # implied units [A/m⁴ * m³] -> [A/m]
return ( (1/4π) .* (sol.u) .* (A/m) )
end
function __H(r̄::AbstractCoordinate, t::Unitful.Time, source::VolumeSource_Sphere{T},
media::PropagationMedia_Simple; rtol=__DEFAULT_RTOL) where {T<:AbstractFloat}
error("Solver not yet implemented.")
end
"""
__integrand_E(r̄′::CoordinateCartesian; r̄::CoordinateCartesian, t::Time,
source::AbstractJefimenkoSource{T}, media::PropagationMedia)::SVector{3,T}
where {T<:AbstractFloat}
Calculate the integrand function for the electric Jefimenko equation at the `source point
r̄′`. Parameterize the integrand function according to a particular `field source`,
`propagation media`, and for an observer positioned at space-time point (`r̄`,`t`).
# Arguments
- `r̄′::UnitfulCoordinateSystems.CoordinateCartesian`: coordinate of the source point
# Parameters
- `r̄::UnitfulCoordinateSystems.CoordinateCartesian`: coordinate of the observation point
- `t::Unitful.Time`: time at the observation point
- `source::JefimenkoSource`: the source model generating the electric field
- `media::PropagationMedia_Simple`: properties of the propagation media
# Returns
- `SVector{3,T}`: the predicted vector-valued integrand value
"""
function __integrand_E(r̄′::CoordinateCartesian; r̄::CoordinateCartesian, t::Unitful.Time,
source::AbstractJefimenkoSource{T}, media::PropagationMedia_Simple
)::SVector{3,T} where {T<:AbstractFloat}
# Get spatial properties, in implicit units of meters
Δr̄_m::SVector{3,T} = ustrip.(T, m, SVector(r̄ - r̄′)) # vector r̄-r̄′
r_m::T = norm(Δr̄_m) # magnitude |r̄-r̄′|
# Get media properties, in implicit units as specified
c::T = ustrip(T, m/s, media.c) # speed of light
ε::T = ustrip(T, A*s/(V*m), media.ε) # permittivity
# Calculate source-observer retarded time, in implicit units of seconds
t′_s::T = ustrip(T, s, t′(r̄,t,r̄′,media))
# Evaluate source function aliases, in implicit units as specified
ρₑ::T = source.ρₑ(r̄′, t′_s)
∂ρₑ_∂t::T = ForwardDiff.derivative(t_s -> source.ρₑ(r̄′,t_s), t′_s)
∂Jₑ_∂t::SVector{3,T} = ForwardDiff.derivative(t_s -> source.Jₑ(r̄′,t_s), t′_s)
Jₕ::SVector{3,T} = source.Jₕ(r̄′, t′_s)
∂Jₕ_∂t::SVector{3,T} = ForwardDiff.derivative(t_s -> source.Jₕ(r̄′,t_s), t′_s)
# Calculate first term [V/m²]
term1::SVector{3,T} = (ε^-1) .* (
((Δr̄_m ./ r_m^3) .* ρₑ)
+ ((Δr̄_m ./ r_m^2) .* (c^-1) .* ∂ρₑ_∂t)
- ((1 / r_m) .* (c^-2) .* ∂Jₑ_∂t)
)
# Calculate second term [V/m²]
term2::SVector{3,T} = LinearAlgebra.cross(((Jₕ ./ r_m^3) + ((1 / r_m^2) .* (c^-1) .* ∂Jₕ_∂t)), Δr̄_m)
return (term1 - term2)
end
"""
__integrand_H(r̄′::CoordinateCartesian; r̄::CoordinateCartesian, t::Time,
source::AbstractJefimenkoSource{T}, media::PropagationMedia)::SVector{3,T}
where {T<:AbstractFloat}
Calculate the integrand function for the magnetic Jefimenko equation at the `source point
r̄′`. Parameterize the integrand function according to a particular `field source`,
`propagation media`, and for an observer positioned at space-time point (`r̄`,`t`).
# Arguments
- `r̄′::UnitfulCoordinateSystems.CoordinateCartesian`: coordinate of the source point
# Parameters
- `r̄::UnitfulCoordinateSystems.CoordinateCartesian`: coordinate of the observation point
- `t::Unitful.Time`: time at the observation point
- `source::JefimenkoSource`: the source model generating the magnetic field
- `media::PropagationMedia_Simple`: properties of the propagation media
# Returns
- `SVector{3,T}`: the predicted vector-valued integrand value
"""
function __integrand_H(r̄′::CoordinateCartesian; r̄::CoordinateCartesian, t::Unitful.Time,
source::AbstractJefimenkoSource{T}, media::PropagationMedia_Simple
) where {T<:AbstractFloat}
# Get spatial properties, in implicit units of meters
Δr̄_m::SVector{3,T} = ustrip.(T, m, SVector(r̄ - r̄′)) # vector r̄-r̄′
r_m::T = norm(Δr̄_m) # magnitude |r̄-r̄′|
# Get media properties, in implicit units as specified
c::T = ustrip(T, m/s, media.c) # speed of light in [m/s]
μ::T = ustrip(T, (V*s)/(A*m), media.μ) # permeability in [Vs/Am]
# Calculate source-observer retarded time, in implicit units of seconds
t′_s::T = ustrip(T, s, t′(r̄,t,r̄′,media))
# Source functions
ρₕ::T = source.ρₕ(r̄′, t′_s)
∂ρₕ_∂t::T = ForwardDiff.derivative(t_s -> source.ρₕ(r̄′,t_s), t′_s)
Jₑ::SVector{3,T} = source.Jₑ(r̄′, t′_s)
∂Jₑ_∂t::SVector{3,T} = ForwardDiff.derivative(t_s -> source.Jₑ(r̄′,t_s), t′_s)
∂Jₕ_∂t::SVector{3,T} = ForwardDiff.derivative(t_s -> source.Jₕ(r̄′,t_s), t′_s)
# Calculate first term
term1::SVector{3,T} = (μ^-1) .* (
((Δr̄_m ./ r_m^3) .* ρₕ)
+ ((Δr̄_m ./ r_m^2) .* (c^-1) .* ∂ρₕ_∂t)
- ((1 / r_m) .* (c^-2) .* ∂Jₕ_∂t)
)
# Calculate second term
term2::SVector{3,T} = LinearAlgebra.cross((Jₑ ./ r_m^3) + ((1 / r_m^2) .* (c^-1) .* ∂Jₑ_∂t), Δr̄_m)
return (term1 + term2)
end
################################################################################
# JEFIMENKOMODEL SOURCES
################################################################################
# Type T defines the data type used for calculation, typically <: AbstractFloat
abstract type AbstractJefimenkoSource{T} end
############################################################################
# VOLUME SOURCES
############################################################################
abstract type AbstractVolumeSource{T} <: AbstractJefimenkoSource{T} end
struct VolumeSource_Rectangular{T} <: AbstractVolumeSource{T}
xlims::Tuple{Unitful.Length, Unitful.Length}
ylims::Tuple{Unitful.Length, Unitful.Length}
zlims::Tuple{Unitful.Length, Unitful.Length}
rho_e::Function
rho_h::Function
J_e::Function
J_h::Function
end
struct VolumeSource_Cylinder{T} <: AbstractVolumeSource{T}
r::Tuple{Unitful.Length, Unitful.Length}
    philims::Tuple{Unitful.DimensionlessQuantity, Unitful.DimensionlessQuantity} # angular limits, e.g. in rad
zlims::Tuple{Unitful.Length, Unitful.Length}
rho_e::Function
rho_h::Function
J_e::Function
J_h::Function
end
struct VolumeSource_Sphere{T} <: AbstractVolumeSource{T}
r::Tuple{Unitful.Length, Unitful.Length}
    thetalims::Tuple{Unitful.DimensionlessQuantity, Unitful.DimensionlessQuantity} # angular limits, e.g. in rad
    philims::Tuple{Unitful.DimensionlessQuantity, Unitful.DimensionlessQuantity} # angular limits, e.g. in rad
rho_e::Function
rho_h::Function
J_e::Function
J_h::Function
end
export VolumeSource_Rectangular, VolumeSource_Cylinder, VolumeSource_Sphere
############################################################################
# SURFACE SOURCES
############################################################################
abstract type AbstractSurfaceSource{T} <: AbstractJefimenkoSource{T} end
struct SurfaceSource_Rectangle{T} <: AbstractSurfaceSource{T}
xlims::Tuple{Unitful.Length, Unitful.Length}
ylims::Tuple{Unitful.Length, Unitful.Length}
rho_e::Function
rho_h::Function
J_e::Function
J_h::Function
end
struct SurfaceSource_Disk{T} <: AbstractSurfaceSource{T}
r::Unitful.Length
rho_e::Function
rho_h::Function
J_e::Function
J_h::Function
end
export SurfaceSource_Rectangle, SurfaceSource_Disk
############################################################################
# LINE SOURCES
############################################################################
abstract type AbstractLineSource{T} <: AbstractJefimenkoSource{T} end
struct LineSource_Straight{T} <: AbstractLineSource{T}
a::AbstractCoordinate
b::AbstractCoordinate
rho_e::Function
rho_h::Function
J_e::Function
J_h::Function
end
export LineSource_Straight
################################################################################
# PROPAGATION MEDIA
################################################################################
abstract type AbstractPropagationMedia end
struct PropagationMedia_Simple <: AbstractPropagationMedia
epsilon::Quantity
mu::Quantity
c::Quantity
end
struct PropagationMedia_DiagonallyAnisotropic <: AbstractPropagationMedia
epsilon::Diagonal{Quantity}
mu::Diagonal{Quantity}
c::Diagonal{Quantity}
end
export PropagationMedia_Simple, PropagationMedia_DiagonallyAnisotropic
################################################################################
# JEFIMENKO MODELS
################################################################################
struct JefimenkoModel{T}
media::AbstractPropagationMedia
sources::Vector{AbstractJefimenkoSource{T}}
metadata::Dict{Symbol,Any}
end
export JefimenkoModel
# JefimenkoModels.jl
[](https://mikeingold.github.io/JefimenkoModels.jl/dev/)
`JefimenkoModels.jl` is a time-domain solver for the electromagnetic near-fields produced by
an arbitrary distribution of charges and currents, both electric and magnetic. This solver
implements a generalized version of the Jefimenko equations that enables the consideration of
magnetic charges and currents, which are often a useful analytical tool in electromagnetics
modeling. The solution process operates purely in the time-domain, enabling the study of
wideband sources without artifacts caused by frequency-domain analysis and with reduced
memory usage compared to FDTD methods.
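For reference, a sketch of the generalized Jefimenko equations this solver evaluates (written here to be term-by-term consistent with the integrand functions implemented in the package), where $R = |\bar{r}-\bar{r}'|$, $\hat{R} = (\bar{r}-\bar{r}')/R$, and all source terms are evaluated at the retarded time $t_r = t - R/c$:

```math
\bar{E}(\bar{r},t) = \frac{1}{4\pi} \left[ \frac{1}{\varepsilon} \int \left( \frac{\rho_e \hat{R}}{R^2} + \frac{\hat{R}}{cR} \frac{\partial \rho_e}{\partial t} - \frac{1}{c^2 R} \frac{\partial \bar{J_e}}{\partial t} \right) \mathrm{d}V' - \int \left( \frac{\bar{J_h}}{R^2} + \frac{1}{cR} \frac{\partial \bar{J_h}}{\partial t} \right) \times \hat{R} \, \mathrm{d}V' \right]
```

```math
\bar{H}(\bar{r},t) = \frac{1}{4\pi} \left[ \frac{1}{\mu} \int \left( \frac{\rho_h \hat{R}}{R^2} + \frac{\hat{R}}{cR} \frac{\partial \rho_h}{\partial t} - \frac{1}{c^2 R} \frac{\partial \bar{J_h}}{\partial t} \right) \mathrm{d}V' + \int \left( \frac{\bar{J_e}}{R^2} + \frac{1}{cR} \frac{\partial \bar{J_e}}{\partial t} \right) \times \hat{R} \, \mathrm{d}V' \right]
```

Setting the magnetic sources ``\rho_h`` and ``\bar{J_h}`` to zero recovers the classical Jefimenko equations.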
This package leverages the
[UnitfulCoordinateSystems.jl](https://github.com/mikeingold/UnitfulCoordinateSystems.jl)
package to provide a handy and performant way to deal with `Unitful` coordinate data.
## Status
This package remains in development status. Multiple dispatch is used to select the solver
method appropriate for a particular source type. The implementation status of these methods
is detailed in the following table.
| Solver Method | Implemented | Tested |
|:---|:---:|:---:|
| `LineSource_Straight` | :white_check_mark: | :white_check_mark: |
| `SurfaceSource_Disk` | :white_check_mark: | :white_check_mark: |
| `SurfaceSource_Rectangle` | :white_check_mark: | :white_check_mark: |
| `VolumeSource_Rectangular` | :white_check_mark: | :white_check_mark: |
| `VolumeSource_Cylinder` | :x: | :x: |
| `VolumeSource_Sphere` | :x: | :x: |
The `LineSource_Straight` solver methods have been validated against a major commercial
software package's Method of Moments (MoM) solver for electric current line sources. For a
single-frequency (CW) source signal, `JefimenkoModels` produced identical results as the
competitor MoM solver. However, when the source signal was defined as a wideband transient
pulse, the `JefimenkoModels` solver was substantially faster and more accurate: the MoM
solver uses a discrete-frequency-domain transfer function that introduces artifacts/error
when solving for wideband signals.
# TODO
## Short Term
- Should `__DEFAULT_RTOL` be defined as a function `__DEFAULT_RTOL(T)`?
- Implement custom `Base.show` pretty-printing for structs
- Either remove Unicode from struct field naming OR provide non-Unicode accessors
- Benchmark a 3D volume source against JefiGPU
## Medium Term
- Make better documentation for source function definitions (units, types, etc)
- Create a test/inspect/validate function for users to look for issues in their definitions?
- Address type stability of source functions by extend struct parameterization
- [Reference to conversation on Julia Zulip](https://julialang.zulipchat.com/#narrow/stream/225542-helpdesk/topic/.E2.9C.94.20High.20GC.20Time.20in.20HCubature/near/323730178)
- Develop/document constructor methods for sources and models
- Implement solvers for
- VolumeSource_Cylinder
- VolumeSource_Sphere
## Longer-Term Vision
- Add a CITATION.bib
- Re-assess the need for solver type parameterization
- Does it even work as intended?
- Is there a performance benefit?
- Evaluate whether Automatic Differentiation can be made to operate through solutions
- Consider permitting sources to have a variable center/orientation
- Consider consolidating the integrand functions using ComponentArray-parameterized source values
- This would add complexity to the E/H functions, but would reduce code duplication here
- The main current difference between R1/R2/R3 is in commented dimensional analysis
- If Unitful evaluation is a serious performance penalty, then R1/R2/R3 could ustrip source
values into implied units and then call a consolidated/abstract integrand function
# JefimenkoModels.jl
This package is a time-domain solver for the electromagnetic near-fields produced by
an arbitrary distribution of charges and currents, both electric and magnetic. This solver
implements a generalized version of the Jefimenko equations that enables the consideration of
magnetic charges and currents, which are often a useful analytical tool in electromagnetics
modeling. The solution process operates purely in the time-domain, enabling the study of
wideband sources without artifacts caused by frequency-domain analysis and with reduced
memory usage compared to FDTD methods.
# Reference
## Public API
The following functions are exported by the `JefimenkoModels` package.
```@docs
JefimenkoModels.E
JefimenkoModels.H
JefimenkoModels.P
```
## Internal API
```@docs
JefimenkoModels.__E
JefimenkoModels.__integrand_E
JefimenkoModels.__H
JefimenkoModels.__integrand_H
JefimenkoModels.__P
```
# Tutorial
## Define a propagation media
The propagation media of a model is assumed to be linear, time-invariant, and
spatially-homogeneous.
When designing a model that will propagate in vacuum (free space), a pre-defined media is
provided.
```julia
media::PropagationMedia_Simple = JefimenkoModels.CLASSICAL_VACUUM
```
Alternatively, a propagation media with real-valued permittivity (``\varepsilon``) and
permeability (``\mu``) can be specified using `Unitful` values. Each term can be defined in
your preferred choice of units, so long as they are dimensionally equivalent to the reference
units: ``\varepsilon`` in [F/m] or [As/Vm], and ``\mu`` in [N/A``^2``] or [Vs/Am].
```julia
epsilon = 8.854_188e-15 * u"(kA*s)/(V*m)"
mu = 1.256_637e-3 * u"(mV*s)/(A*m)"
c = 2.997_925e5 * u"km/s"
PropagationMedia_Simple(epsilon, mu, c)
```
## Define a source
When any of the source components is neglected, e.g. a source with only currents (``J``) or
charges (``\rho``), a pair of pre-defined null sources are provided for convenience. The
`JefimenkoModels.NULL_CHARGE` function can be used in place of either ``\rho(\bar{r},t)``
function, and the `JefimenkoModels.NULL_CURRENT` function can be used in place of either
``J(\bar{r},t)`` function.
In the current version of `JefimenkoModels`, source charge and current functions must be
defined in a specific format. The functions should take two arguments: a
`UnitfulCoordinateSystems.AbstractCoordinate` indicating the spatial position evaluated, and
the `Real`-typed time in implied units of seconds. The functions should return a `Real` number
or a three-element `SVector` of `Real`s, with implied units according to the following tables.
An update is planned that will enable `Unitful` time argument and return types. This will
hopefully simplify the source design process and identify potential dimensional errors.
**Table: Line Source Functions**
| Function | Arg 1 | Arg 2 [Units] | Return Type [Units] |
|---|---|---|---|
| Electric charge density ``\rho_e(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::Real` [C/m] |
| Magnetic charge density ``\rho_h(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::Real` [Wb/m] |
| Electric current density ``\bar{J_e}(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::SVector{3,Real}` [A] |
| Magnetic current density ``\bar{J_h}(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::SVector{3,Real}` [V] |
**Table: Surface Source Functions**
| Function | Arg 1 | Arg 2 [Units] | Return Type [Units] |
|---|---|---|---|
| Electric charge density ``\rho_e(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::Real` [C/m``^2``] |
| Magnetic charge density ``\rho_h(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::Real` [Wb/m``^2``] |
| Electric current density ``\bar{J_e}(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::SVector{3,Real}` [A/m] |
| Magnetic current density ``\bar{J_h}(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::SVector{3,Real}` [V/m] |
**Table: Volume Source Functions**
| Function | Arg 1 | Arg 2 [Units] | Return Type [Units] |
|---|---|---|---|
| Electric charge density ``\rho_e(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::Real` [C/m``^3``] |
| Magnetic charge density ``\rho_h(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::Real` [Wb/m``^3``] |
| Electric current density ``\bar{J_e}(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::SVector{3,Real}` [A/m``^2``] |
| Magnetic current density ``\bar{J_h}(\bar{r},t)`` | `r::AbstractCoordinate` | `t::Real` [s] | `::SVector{3,Real}` [V/m``^2``] |
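As a minimal illustration of the expected signatures (the function names below are arbitrary, not part of the package API), a null electric charge density and a spatially-uniform, x̂-directed electric surface current driven at 100 MHz might be written as:

```julia
using UnitfulCoordinateSystems # provides AbstractCoordinate and the unit vector x̂

# Hypothetical surface-source functions: t_s carries implied units of seconds,
# and the returned values carry the implied units from the surface-source table above.
ρₑ_example(r̄::AbstractCoordinate, t_s::Real) = 0.0                        # -> [C/m²]
Jₑ_example(r̄::AbstractCoordinate, t_s::Real) = x̂ .* cos(2π * 100e6 * t_s) # -> [A/m]
```

Functions with these signatures can then be passed to a source constructor such as `SurfaceSource_Disk`, as demonstrated in the complete examples that follow.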
## Construct a model
`JefimenkoModel`s have a `metadata::Dict` provision. This dictionary is not currently used
by the solver. Rather, it provides the user with a convenient place to store any desired
metadata.
The following example produces a `JefimenkoModel` with a single one-meter line source on
the x-axis. This source is characterized by a spatially-uniform continuous wave (CW) electric
current.
```julia
using JefimenkoModels
using Unitful, UnitfulCoordinateSystems
using Unitful.DefaultSymbols: m, ns
model_line = let
# Single line source on x-axis from -0.5m to +0.5m
# Electric current only: spatially-uniform, x-directed, driven by 100 MHz CW sinusoid
a = CoordinateCartesian(-0.5m, 0.0m, 0.0m)
b = CoordinateCartesian( 0.5m, 0.0m, 0.0m)
Je(r̄::AbstractCoordinate, t_s::Real) = x̂ .* cos(2π*100e6*t_s) # t in s -> Je in A
source = LineSource_Straight{Float64}(a, b, NULL_CHARGE, NULL_CHARGE, Je, NULL_CURRENT)
metadata = Dict(:name => "Tutorial Example",
:charges => "None",
                    :currents => "Electric-Only",
:spatial_distribution => "Uniform",
:source_length => 1.0m,
:signal_type => "100 MHz CW")
JefimenkoModel{Float64}(CLASSICAL_VACUUM, [source], metadata)
end
```
The following example produces a `JefimenkoModel` for a one-meter diameter aperture source on
the xy-plane and centered on the origin. This source is characterized by a spatially-uniform
electric current and driven by a wideband transient pulse.
```julia
using JefimenkoModels
using Unitful, UnitfulCoordinateSystems
model_disk = let
# Surface disk source with radius 0.5m
# Electric current only: spatially-uniform, x-directed, driven by a transient pulse
ρ₀ = 0.5m
(t₀_s, f₀_Hz, β₀) = (5.0e-9, 500e6, 1.25)
sig(t_s::Real) = sin(2π*f₀_Hz*t_s) * exp(-β₀*(f₀_Hz*t_s)^2)
Je(r̄::AbstractCoordinate, t_s::Real) = x̂ .* sig(t_s-t₀_s) # t in s -> Jₑ in A
source = SurfaceSource_Disk{Float64}(ρ₀, NULL_CHARGE, NULL_CHARGE, Je, NULL_CURRENT)
metadata = Dict(:description=>"Uniform current over a 0.5m disk, stimulated by transient pulse signal.")
JefimenkoModel{Float64}(CLASSICAL_VACUUM, [source], metadata)
end
```
## Calculate the electromagnetic fields
The electromagnetic near-fields produced by the aperture source described above can be
calculated by specifying an observation point and the desired time-domain.
```julia
# Observation location and time domain of interest
r = CoordinateCartesian(0.0m, 0.0m, 1.5m)
t = range(0.0ns, 20.0ns, length=800)
# Calculate the fields at r over the time domain
efield = map(t -> E(r,t,model_disk), t)
hfield = map(t -> H(r,t,model_disk), t)
```
Inspecting the data on this specified time-domain, the source electric current density
(spatially-uniform across the 1 meter diameter aperture) is

The electric field measured at the observation point is

And the magnetic field measured at the observation point is

module AUCell
using Random, DelimitedFiles, SparseArrays, HypothesisTests, LinearAlgebra
export pathway_AUC_main,
reAUCluster_kernel, aucell_kernel,
filter_expr_matrix,
read_mtx, read_gmt, read_meta
include("code/read_file.jl")
include("code/process_expr.jl")
include("code/auc_pathway_recluster.jl")
include("code/aucell.jl")
"""
# Examples
## default: reAUCluster mode
```jldoctest
julia> pathway_AUC_main(use_testdata = "yes")
1.632442 seconds (8.44 M allocations: 279.642 MiB, 7.95% gc time, 73.87% compilation time)
[ Info: INFO: The size of expression profile was (36602, 8).
1.779532 seconds (4.95 M allocations: 260.557 MiB, 4.14% gc time, 97.91% compilation time)
[ Info: INFO: The filtered of expression profile size was (7549, 8).
0.000320 seconds (27 allocations: 34.672 KiB)
[ Info: INFO: There are 1 pathways to be analyzed.
0.768511 seconds (1.50 M allocations: 99.943 MiB, 2.46% gc time, 95.16% compilation time)
2×5 Matrix{Any}:
"pathways_name" ["cluster1"] … ["t"] ["pvalue"]
"HALLMARK_TNFA_SIGNALING_VIA_NFKB" Any["AAACCCAAGGGTTAAT-1", "AAACCCAAGAAACCAT-1", "AAACCCAAGCAACAAT-1", "AAACCCAAGCCAGAGT-1", "AAACCCACAGCAGATG-1"] [4.92654] [0.00263937]
```
## aucell mode
```jldoctest
julia> pathway_AUC_main(use_testdata = "yes", mode = "aucell")
1.557316 seconds (8.44 M allocations: 279.659 MiB, 3.27% gc time, 78.85% compilation time)
[ Info: INFO: The size of expression profile was (36602, 8).
1.771720 seconds (4.95 M allocations: 260.557 MiB, 3.69% gc time, 97.39% compilation time)
[ Info: INFO: The filtered of expression profile size was (7549, 8).
0.000329 seconds (27 allocations: 34.672 KiB)
[ Info: INFO: There are 1 pathways to be analyzed.
0.667055 seconds (1.75 M allocations: 87.598 MiB, 3.82% gc time, 99.79% compilation time)
[ Info: INFO: According to the meta information, there are 8 groups of data and each group will be analyzed with the rest of the sample.
3.153389 seconds (6.62 M allocations: 421.960 MiB, 3.39% gc time, 80.62% compilation time)
2×65 Matrix{Any}:
"GeneSet" "AAACCCAAGAAACCAT-1" "AAACCCAAGAAACCAT-1" "AAACCCAAGAAACCAT-1" … "AAACCCAGTACGGGAT-1" "AAACCCAGTACGGGAT-1" "AAACCCAGTACGGGAT-1"
"HALLMARK_TNFA_SIGNALING_VIA_NFKB" 0.506962 0.500821 0.515332 0.512858 0.482078 0.440029
```
"""
function pathway_AUC_main(fn_expr::AbstractString = "matrix.mtx",
rn_expr::AbstractString = "features.tsv",
cn_expr::AbstractString = "barcodes.tsv",
fn_feature::AbstractString = "fn_feature.gmt",
fn_meta::AbstractString = "fn_meta.txt";
fn_meta_delim::AbstractChar = '\t',
fn_meta_group::AbstractString = "group",
file_format_expr::AbstractString = "read_mtx", # There are two input modes "read_mtx" and "read_expr_matrix" for the expression profile file format.
T::Type = Int32,
feature_col::Int = 2,
barcode_col::Int = 1,
rem_delim::AbstractChar = ' ',
feature_threshold::Int = 30, # Include features (genes) detected in at least this many cells
cell_threshold::Int = 200, # Include profiles (cells) where at least this many features are detected
file_format_feature::AbstractString = "read_gmt", # There are two input modes "read_gmt" and "read_gsf" for the file format of the features contained in the pathways.
fn_feature_delim::AbstractChar = ' ',
# use_HALLMARK_pathway::AbstractString = "no",
mode::AbstractString = "reAUCluster", # "reAUCluster" subgroups samples based on pathway activation; "aucell" is an optional mode that calculates AUCs from characteristic genes for two groups.
ncell_pseudo::Int = 0, # ncell_pseudo is the number of pseudobulk combined cells in each group
auc_x_threshold::Float64 = 1.0,
remove_zeros::Bool = true,
use_testdata::AbstractString = "no",
work_dir::AbstractString = "./")
cd(work_dir)
if use_testdata == "yes"
fn_expr = joinpath(@__DIR__, "..", "test", "matrix.mtx")
rn_expr = joinpath(@__DIR__, "..", "test", "features.tsv")
cn_expr = joinpath(@__DIR__, "..", "test", "barcodes.tsv")
fn_feature = joinpath(@__DIR__, "..", "test", "fn_feature.gmt")
fn_meta = joinpath(@__DIR__, "..", "test", "fn_meta.txt")
feature_threshold = 1
cell_threshold = 1
end
# (use_HALLMARK_pathway == "yes") ? fn_feature = joinpath(@__DIR__, "..", "HALLMARK_pathway", "h_all_v2023_1_Hs_symbols.gmt") : fn_feature
@time mat, fea, bar = (file_format_expr == "read_mtx") ? read_mtx(fn_expr, rn_expr, cn_expr; T, feature_col, barcode_col) : read_expr_matrix(fn_expr, rn_expr, cn_expr; matrix_delim = rem_delim)
@info "INFO: The size of expression profile was $(size(mat))."
@time mat, kf, kb = filter_expr_matrix(mat, feature_threshold, cell_threshold)
@info "INFO: The filtered of expression profile size was $(size(mat))."
fea = fea[kf]
bar = bar[kb]
@time pathway_name, pathway_genes = (file_format_feature == "read_gmt") ? read_gmt(fn_feature) : read_gsf(fn_feature; delim = fn_feature_delim)
@info "INFO: There are $(length(pathway_name)) pathways to be analyzed."
if mode == "reAUCluster"
# Reclustering result: column 1 = pathway name, column 2 = recluster 1, column 3 = recluster 2, column 4 = t-test t value, column 5 = t-test p-value, column 6 = per-sample AUC values (in the order of the samples in the expression profile)
recluster_bar = reAUCluster_kernel(mat, fea, bar, pathway_genes; np = ncell_pseudo, remove_zeros = remove_zeros)
# Save the reclustering result: column 1 = pathway name, column 2 = recluster 1, column 3 = recluster 2, column 4 = t-test t value, column 5 = t-test p-value
recluster_bar = vcat(hcat("pathways_name",[[["cluster1"]] [["cluster2"]] [["t"]] [["pvalue"]] [["sample1","sample2","sample3"]]]),hcat(pathway_name,recluster_bar))
writedlm("reAUCluster_result.tsv", recluster_bar[:,1:5], "\t")
# Save each sample's AUC in each pathway; rows are pathways, columns are samples
writedlm("reAUCluster_AUC.tsv", recluster_bar[:, [1, end]], "\t")
return recluster_bar[:,1:5]
else
@time grp, nam = read_meta(fn_meta, fn_meta_group; delim = fn_meta_delim)
@info "INFO: According to the meta information, there are $(length(grp)) groups of data and each group will be analyzed with the rest of the sample."
result = []
cluster = []
#writedlm("pathway_names.tsv", pathway_name, '\t')
@time for i in 1:length(nam)
# NOTE: `profiles` (the cells of group i) is currently unused below; the AUC is computed on the full matrix for every group
profiles = mat[:, filter(!isnothing, indexin(grp[i], bar))]
res = aucell_kernel(mat, fea, pathway_genes, np = ncell_pseudo, x_threshold = auc_x_threshold, remove_zeros = remove_zeros)
_, c =size(res)
push!(cluster, fill(nam[i],c))
#writedlm(nam[i]*"_hallmark.tsv", res, '\t')
push!(result, res)
end
#/ using BSON: @save
#/ comb = (result, cluster, pathway_name)
#/ @save "aucell_result.bson" comb
result = hcat(pathway_name, Matrix{Any}(reduce(hcat, result)))
cluster = vcat(["GeneSet"], reduce(vcat, cluster))
# Convert the Vector into a Matrix
result = vcat(reshape(cluster, 1, :), result)
writedlm("aucell_result.tsv", result, '\t')
return result
end
end
end
| AUCell | https://github.com/yanjer/AUCell.jl.git |
export reAUCluster_kernel
using Random, DelimitedFiles, SparseArrays, HypothesisTests, LinearAlgebra
include("roc.jl")
include("process_expr.jl")
function cluster_inter_auc(nmat::AbstractMatrix,
group_cell::Vector{Int64},
replace_cell::Int64,
gs_num::Int64,
positives::BitVector)
alter_cluster_sets = reduce(hcat, [[[group_cell[setdiff(1:end,i)];replace_cell]] for i in 1:gs_num])
new_group_auc = mapreduce(x-> roc_kernel(reshape(sum.(eachrow(nmat[:, x])),:,1), positives), hcat, alter_cluster_sets)
return new_group_auc
end
"""
Aside each cell into clusters.
"""
function classify_cell_cluster(nmat::AbstractMatrix,
max_group::Vector{Int64},
max_group_auc::Float64,
min_group::Vector{Int64},
min_group_auc::Float64,
all_group_sample::Vector{Int64},
diff_cell::Vector{Int64},
positives::BitVector)
gs_num = size(max_group)[1]
group2 = copy(min_group)
group1 = copy(max_group)
# # Each column holds the AUCs obtained after a candidate cell replaces, in turn, each member of the initial cluster
# println("timing")
# @time max_group_cell = reduce(hcat, [cluster_inter_auc(nmat,max_group,diff_cell[j],gs_num,positives) for j in 1:size(diff_cell)[1]])
# @time min_group_cell = reduce(hcat, [cluster_inter_auc(nmat,min_group,diff_cell[j],gs_num,positives) for j in 1:size(diff_cell)[1]])
# d_recell_gmax = sum(max_group_cell, dims = 2)/gs_num .- max_group_auc
# d_recell_gmin = sum(min_group_cell, dims = 2)/gs_num .- min_group_auc
# group1_local = (((d_recell_gmax .>= 0) .&& (d_recell_gmin .>= 0)) .|| (.!((d_recell_gmax .< 0) .&& (d_recell_gmin .< 0)) .&& (abs.(d_recell_gmax) .< abs.(d_recell_gmin))))
# @time max_group_cell = mapreduce(x-> cluster_inter_auc(nmat,max_group,x,gs_num,positives), hcat, diff_cell)
for j in 1:size(diff_cell)[1]
max_group_cell = cluster_inter_auc(nmat,max_group,diff_cell[j],gs_num,positives)
min_group_cell = cluster_inter_auc(nmat,min_group,diff_cell[j],gs_num,positives)
# Mean AUC after replacing one cell of the initial cluster
d_recell_gmax = sum(max_group_cell)/gs_num - max_group_auc
d_recell_gmin = sum(min_group_cell)/gs_num - min_group_auc
# If both deltas are >= 0, assign to the max cluster; if both are < 0, assign to the min cluster; otherwise assign to the cluster with the smaller absolute delta
group1_local = (((d_recell_gmax .>= 0) .&& (d_recell_gmin .>= 0)) .|| (.!((d_recell_gmax .< 0) .&& (d_recell_gmin .< 0)) .&& (abs.(d_recell_gmax) .< abs.(d_recell_gmin))))
if group1_local
group1 = vcat(group1,diff_cell[j,:])
max_auc = findmax(max_group_cell)
if(max_auc[1] > max_group_auc)
max_group[max_auc[2][1]] = diff_cell[j]
max_group_auc = max_auc[1]
end
else
group2 = vcat(group2,diff_cell[j,:])
min_auc = findmin(min_group_cell)
if (min_auc[1] < min_group_auc)
min_group[min_auc[2][1]] = diff_cell[j]
min_group_auc = min_auc[1]
end
end
end
return group1,group2
end
"""
reAUCluster mode: samples are subgrouped based on pathway activation.
# Examples
```jldoctest
julia> mode_pathway_cluster([[1,5,7] [6,4,3] [8,5,2]],[[1,5,7] [6,4,3] [8,5,2]],BitVector([0,1,1]),reshape([1,2,3],:,1),["sample1","sample2","sample3"])
0.005570 seconds (2.62 k allocations: 150.455 KiB, 97.78% compilation time)
1×5 Matrix{Any}:
["sample1"] ["sample2", "sample3"] NaN NaN [0.5 0.0 0.0]
```
"""
function mode_pathway_cluster(nmat::AbstractMatrix,
nmat_p::AbstractMatrix,
positives::BitVector,
it_group::AbstractMatrix,
barcodes::AbstractVector;
decreasing::Bool = true,
auc_only::Bool = true,# must be true
verbose::Bool = false)
expr_auc = roc_kernel(nmat_p, positives)
g_max = findmax(expr_auc)
g_min = findmin(expr_auc)
(g_max != g_min) || throw(ArgumentError("The AUC of all samples in this pathway remained consistent at $(g_max[1])"))
max_group = it_group[g_max[2][2],:]
min_group = it_group[g_min[2][2],:]
all_group_sample = vcat(max_group,min_group)
diff_cell = setdiff([1:size(nmat)[2]...],all_group_sample)
# group1 grows from the initial cluster with the largest AUC; group2 from the one with the smallest AUC
@time group1,group2 = classify_cell_cluster(nmat,max_group,g_max[1],min_group,g_min[1],all_group_sample,diff_cell,positives)
# Return the clustering result: column 1 = recluster 1, column 2 = recluster 2, column 3 = per-sample AUC values (in the order of the samples in the expression profile)
if (size(nmat) == size(nmat_p))
ttest_result = EqualVarianceTTest(expr_auc[:,group1][:],expr_auc[:,group2][:])
return [[barcodes[group1]] [barcodes[group2]] [[ttest_result.t]] [[pvalue(ttest_result)]] [expr_auc]]
else
expr_auc_all = roc_kernel(nmat, positives)
ttest_result = EqualVarianceTTest(expr_auc_all[:,group1][:],expr_auc_all[:,group2][:])
return [[barcodes[group1]] [barcodes[group2]] [[ttest_result.t]] [[pvalue(ttest_result)]] [expr_auc_all]]
end
end
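"""
    reAUCluster_kernel(mat, features, barcodes, gene_set)

Split the profiles (columns of `mat`, named by `barcodes`) into two clusters for each `gene_set`, based on the per-sample pathway AUC values. For each gene set, the returned row holds the barcodes of the two clusters, the t statistic and p-value of a two-sample t-test between the clusters' AUC values, and the per-sample AUC values.

A sketch of a call (illustrative values only; the exact clustering depends on the data):

```julia
mat = [1 2 3; 4 5 6; 0 1 2; 7 8 0]
fea = ["a", "b", "c", "d"]
bar = ["s1", "s2", "s3"]
reAUCluster_kernel(mat, fea, bar, ["b", "c"])
```
"""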
function reAUCluster_kernel(
mat::AbstractMatrix, # Expression profiles matrix
features::AbstractVector, # Row names for the profiles
barcodes::AbstractVector, # Col names for the profiles
gene_set::AbstractVector; # A set of genes (features) to be tested or a group of gene-set (Vector of Vectors)
np::Int = 0, # If np > 0, pseudobulk mode will be turned on
remove_zeros::Bool = true # Whether clear those features with 0 expression values in all profiles
)
r, c = size(mat)
r == length(features) || throw(DimensionMismatch("'mat' and `features' do not have equal number of rows."))
fea = features
nmat = mat
if remove_zeros
keep = reshape(sum(mat, dims = 2), :) .> 0
nmat = nmat[keep,:]
fea = fea[keep]
end
if np > 0
nmat_p,it_group = generate_pseudobulk(nmat, np)
else
it_group = reshape([1:size(nmat)[2]...],:,1)
nmat_p = nmat
end
if typeof(first(gene_set)) <: Vector
return mapreduce(x-> mode_pathway_cluster(nmat, nmat_p, fea .∈ (x, ), it_group, barcodes), vcat, gene_set)
else
pos = fea .∈ (gene_set,)
return mode_pathway_cluster(nmat, nmat_p, pos, it_group, barcodes)
end
end
export aucell_kernel
using Random, DelimitedFiles, SparseArrays, LinearAlgebra, Statistics
include("process_expr.jl")
include("roc.jl")
"""
aucell_kernel(mat, features, gene_set)
Calculate the AUC for each `gene_set` in each profile of `mat` and row names of `mat` are stored as `features` which should be the same types with those in `gene_set`.
# Examples
```jldoctest
julia> mat = [1 2 3;4 5 6;0 1 2;7 8 0]
4×3 Matrix{Int64}:
1 2 3
4 5 6
0 1 2
7 8 0
julia> fea = ["a", "b", "c", "d"]
julia> gene_set = ["b","c", "e"]
julia> aucell_kernel(mat, fea, gene_set)
1×3 Matrix{Float64}:
0.125 0.125 0.5
julia> gene_sets = [["a", "b", "e"], ["b", "d", "e"]]
julia> aucell_kernel(mat, fea, gene_sets)
2×3 Matrix{Float64}:
0.25 0.25 0.75
0.75 0.75 0.375
```
"""
function aucell_kernel(
mat::AbstractMatrix, # Expression profiles matrix
features::AbstractVector, # Row names for the profiles
gene_set::AbstractVector; # A set of genes (features) to be tested or a group of gene-set (Vector of Vectors)
np::Int = 0, # If np > 0, pseudobulk mode will be turned on
x_threshold::Number = 1, # threshold for calculating AUC, (default:1, common AUC)
remove_zeros::Bool = true # Whether clear those features with 0 expression values in all profiles
)
r, c = size(mat)
r == length(features) || throw(DimensionMismatch("'mat' and `features' do not have equal number of rows."))
fea = features
nmat = mat
if remove_zeros
keep = reshape(sum(mat, dims = 2), :) .> 0
nmat = nmat[keep,:]
fea = fea[keep]
end
if np > 0
nmat = first(generate_pseudobulk(nmat, np)) # generate_pseudobulk returns (matrix, index matrix); only the matrix is needed here
end
if typeof(first(gene_set)) <: Vector
return mapreduce(x-> roc_kernel(nmat, fea .∈ (x, ), x_threshold = x_threshold), vcat, gene_set)
else
pos = fea .∈ (gene_set,)
return roc_kernel(nmat, pos, x_threshold = x_threshold)
end
end
"""
cell_marker_score(mat, features, barcodes, gene_set, group)
Given a single-cell RNA expression matrix `mat` with row-names of `features` and column-names of `barcodes`, calculate the relative cell type marker scores (0-1) for the ` gene_set`; the grouping information is specified in the `group` (vector of vectors, which store the cell barcodes in each group).
# Examples
```jldoctest
julia> mat = rand(0:32, 12, 8)
julia> features = 1:12
julia> gene_set = [1,5,6,8]
julia> barcodes = ["a", "b", "c", "d", "e", "f", "g", "h"]
julia> group = [["a", "b", "g", "h"], ["c", "d", "e", "f"]]
2-element Vector{Vector{String}}:
["a", "b", "g", "h"]
["c", "d", "e", "f"]
julia> cell_marker_score(mat, features, barcodes, gene_set, group)
4 genes are found among 4 genes.
1×2 Matrix{Float64}:
0.476227 0.523773
```
"""
function cell_marker_score(
mat::AbstractMatrix, # Expression profiles matrix
features::AbstractVector, # Row names for the profiles
bar::AbstractVector, # Row names for the profiles
gene_set::AbstractVector, # A set of cell-specific genes (expressed only in certain kind of cells, or upregulated)
group::AbstractVector
)
r, c = size(mat)
r == length(features) || throw(DimensionMismatch("`mat` and `features` do not have equal number of rows."))
c == length(bar) || throw(DimensionMismatch("`mat` and `bar` do not have equal number of columns."))
gen = unique(gene_set)
ind = filter(.!=(nothing), indexin(gen, features))
isempty(ind) && throw("None of the marker genes are found in the features.")
dat = mat[ind,:]
println(size(dat, 1), " genes are found among ", length(gen), " genes.")
if size(dat,1) == 0
return zeros(Float64, 1, length(group))
else
res = mapreduce(i -> mean(dat[:, filter(.!=(nothing),indexin(i, bar))], dims=2), hcat, group)
res = mapslices(x -> (sum(x) == 0 ? normalize(x .+ 1) : normalize(x, 1)), res, dims = 2)
return normalize(sum(res, dims = 1), 1)
end
end
export filter_expr_matrix, generate_pseudobulk
using Random, DelimitedFiles, SparseArrays, HypothesisTests, LinearAlgebra
"""
filter_expr_matrix(mat, feature_threshold, cell_threshold)
Filter an expression matrix `mat`, only keep those genes expressed in greater than `feature_threshold` cells and cells expressing greater than `cell_threshold` features.
Return the filtered matrix and the bit vectors for keeping features and cells.
# Examples
```jldoctest
julia> @time mat, fea, bar = read_mtx("matrix.mtx", "features.tsv", "barcodes.tsv")
julia> size(mat)
(36601, 5744)
julia> @time mat2, kf, kb = filter_expr_matrix(mat)
26.438175 seconds (978.08 k allocations: 1.320 GiB, 0.52% gc time)
(sparse([2, 12, 15, 25, 26, 27, 29, 32, 34, 37 … 21104, 21105, 21106, 21107, 21108, 21109, 21110, 21111, 21113, 21116], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1 … 5728, 5728, 5728, 5728, 5728, 5728, 5728, 5728, 5728, 5728], Int32[1, 1, 5, 1, 4, 1, 1, 1, 1, 1 … 287, 8, 239, 124, 32, 8, 145, 41, 99, 2], 21121, 5728), Bool[0, 0, 0, 1, 0, 0, 1, 0, 0, 0 … 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], Bool[1, 1, 1, 1, 1, 1, 1, 1, 1, 1 … 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
julia> size(mat2)
(21121, 5728)
julia> fea2 = fea[kf]; bar2 = bar[kb];
julia> length(fea2)
21121
julia> length(bar2)
5728
```
# Arguments
- `mat::AbstractMatrix`: expression matrix (either dense or sparse).
- `feature_threshold::Int`: the least number of cells that a feature must express in, in order to be kept. Default: 30.
- `cell_threshold::Int`: the least number of genes that a cell must express, in order to be kept. Default: 200.
"""
function filter_expr_matrix(mat::AbstractMatrix, feature_threshold::Int=30, cell_threshold::Int=200)
feature_threshold > 0 || throw("`feature_threshold` must be a positive integer.")
cell_threshold > 0 || throw("`cell_threshold` must be a positive integer.")
local nf, nc
if typeof(mat) <: SparseMatrixCSC
nc = mapslices(nnz, mat, dims = 1) # 1xc
nf = mapslices(nnz, mat, dims = 2) # rx1
else
nc = count(!=(0), mat, dims = 1) # 1xc
nf = count(!=(0), mat, dims = 2) # rx1
end
kf = reshape(nf .> feature_threshold, :)
kc = reshape(nc .> cell_threshold, :)
return (mat[kf, kc], kf, kc)
end
"""
generate_pseudobulk(mat, np)
Generate a matrix of pseudobulk profiles from `mat` which stores single-cell RNA profiles. Each column represents a cell's profile. Each pseudobulk profile is generated from `np` (default: 10) single-cell profiles. Returns the pseudobulk matrix together with the `np`-by-`ns` index matrix recording which columns were combined into each pseudobulk profile.
# Examples
```jldoctest
julia> first(generate_pseudobulk(rand(0:32, 10, 6), 3))
10×2 Matrix{Int64}:
59 30
66 34
37 26
58 70
83 86
15 11
58 62
38 62
62 35
15 51
```
"""
function generate_pseudobulk(mat::AbstractMatrix, # Each column is a profile
np::Int = 10 # Number of profiles in each pseudobulk profile
)
r,c = size(mat)
np > 0 || throw("`np` must be a positive integer.")
c >= np || throw("There are not enough profiles to generate a pseudobulk profile.")
ind = randperm(c) # random permutation
ns = floor(Int, c/np) # Number of pseudobulk profiles generated
ind = reshape(ind[1:(ns*np)], np, :)
#TODO: `mapslices` and `sum` (with dims) cannot be combined together.
return reduce(hcat, [sum(mat[:, i], dims = 2) for i in eachcol(ind)]), ind
end
export read_mtx, read_gmt, read_meta
using DelimitedFiles, SparseArrays
"""
read_gmt(fn)
Read in a GMT file (MSigDB gene set format), where `fn` is the file path.
# Examples
```jldoctest
julia> res = read_gmt("h.all.v7.5.1.symbols.gmt")
julia> gn, gs = read_gmt("h.all.v7.5.1.symbols.gmt")
```
"""
function read_gmt(fn::AbstractString)
isfile(fn) || throw("File $fn does not exist.")
gmt = map(x->split(x, "\t"), readlines(fn)) # tab (not just space) as field delimiter
gn = first.(gmt) # First column stores the pathway (gene set) names
gs = map(x->strip.(x[3:end]), gmt) # Drop the first two columns (gene set name and URL), strip white space which may be introduced accidentally
return (gn, gs)
end
"""
read_gsf(fn [, delim = ','])
Read in a general gene set file, where `fn` is the file path and the fields are separated by the `delim` character (default: white space). Each row represents a gene set and the first column is the name of the set and the rest are the genes in the set.
# Examples
```jldoctest
julia> gn, gs = read_gsf("my_gene_set.csv", delim = ',')
julia> gn, gs = read_gsf("my_gene_set.tsv", delim = '\t')
julia> gn, gs = read_gsf("my_gene_set.tsv")
```
"""
function read_gsf(fn::AbstractString; delim::AbstractChar = ' ')
isfile(fn) || throw("File $fn does not exist.")
local gmt
if delim == ' '
gmt = map(split, readlines(fn)) # white space (see `isspace`) as field delimiter
else
gmt = map(x->split(x, delim), readlines(fn))
end
gn = first.(gmt) # First column stores the pathway (gene set) names
gs = map(x->strip.(x[2:end]), gmt) # Drop the first column (gene set name), strip white space which may be introduced accidentally
return (gn, gs)
end
"""
read_expr_matrix(fn, rn, cn)
Read in an expression matrix stored in `fn` where its row names are stored in `rn` and column names are stored in `cn`.
It returns (matrix, vector of row names, vector of column names)
# Examples
```jldoctest
julia> mat, fea, bar = read_expr_matrix("matrix.csv", "features.tsv", "barcodes.tsv", matrix_delim = ',')
julia> mat, fea, bar = read_expr_matrix("matrix.txt", "features.tsv", "barcodes.tsv", matrix_delim = '\t')
julia> mat, fea, bar = read_expr_matrix("matrix.tsv", "features.tsv", "barcodes.tsv", matrix_delim = '\t')
julia> mat, fea, bar = read_expr_matrix("matrix.tsv", "features.tsv", "barcodes.tsv")
```
"""
function read_expr_matrix(fn::AbstractString,rn::AbstractString, cn::AbstractString; matrix_delim::AbstractChar = ' ')
isfile(fn) || throw("File $fn does not exist.")
isfile(rn) || throw("File $rn does not exist.")
isfile(cn) || throw("File $cn does not exist.")
mat = readdlm(fn)
fea = reshape(readdlm(rn), :)
cel = reshape(readdlm(cn), :)
r, c = size(mat)
r == length(fea) || throw(DimensionMismatch("`rn` does not match with `fn`."))
c == length(cel) || throw(DimensionMismatch("`cn` does not match with `fn`."))
return (mat, fea, cel)
end
#Read in 10X mtx format (MatrixMarket)
"""
read_mtx(fn, rn, cn)
Read in the common 10X single-cell RNA expression file in the MTX format (unzipped).
# Examples
```jldoctest
julia> @time mat, fea, bar = read_mtx("matrix.mtx", "features.tsv", "barcodes.tsv")
62.946154 seconds (481.84 M allocations: 13.082 GiB, 3.50% gc time)
(sparse([7, 27, 31, 44, 45, 46, 49, 52, 54, 58 … 36563, 36564, 36565, 36566, 36567, 36568, 36569, 36570, 36572, 36576], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1 … 5744, 5744, 5744, 5744, 5744, 5744, 5744, 5744, 5744, 5744], Int32[1, 1, 5, 1, 4, 1, 1, 1, 1, 1 … 287, 8, 239, 124, 32, 8, 145, 41, 99, 2], 36601, 5744), Any["ENSG00000243485", "ENSG00000237613", "ENSG00000186092", "ENSG00000238009", "ENSG00000239945", "ENSG00000239906", "ENSG00000241860", "ENSG00000241599", "ENSG00000286448", "ENSG00000236601" … "ENSG00000274175", "ENSG00000275869", "ENSG00000273554", "ENSG00000278782", "ENSG00000277761", "ENSG00000277836", "ENSG00000278633", "ENSG00000276017", "ENSG00000278817", "ENSG00000277196"], Any["AAACCCAAGAACAAGG-1", "AAACCCAAGCCTGAAG-1", "AAACCCAAGCTGAGTG-1", "AAACCCAAGTATTGCC-1", "AAACCCAGTCATGACT-1", "AAACCCATCGGAATTC-1", "AAACCCATCTGTCTCG-1", "AAACGAAAGCGGGTAT-1", "AAACGAAAGGTAGCCA-1", "AAACGAAAGTGGTGAC-1" … "TTTGGTTTCCACAGCG-1", "TTTGTTGCACCTCGTT-1", "TTTGTTGCAGCTGTTA-1", "TTTGTTGCATACCGTA-1", "TTTGTTGGTAGGACCA-1", "TTTGTTGGTGACAGGT-1", "TTTGTTGTCCACTTTA-1", "TTTGTTGTCCTATTGT-1", "TTTGTTGTCGCTCTAC-1", "TTTGTTGTCTCCAAGA-1"])
```
# Arguments
- `fn::AbstractString`: MTX file path .
- `rn::AbstractString`: features file path.
- `cn::AbstractString`: barcodes file path.
- ` T::Type`: Datatype in the MTX file. Default: Int32.
- `feature_col::Int`: which column is used as feature names. Default: 1 (first).
- `barcode_col::Int`: which column is used as barcode names. Default: 1 (first).
"""
function read_mtx(fn::AbstractString, rn::AbstractString, cn::AbstractString; T::Type = Int32, feature_col::Int = 2, barcode_col::Int = 1)
isfile(fn) || throw("File $fn does not exist.")
isfile(rn) || throw("File $rn does not exist.")
isfile(cn) || throw("File $cn does not exist.")
dat = readdlm(fn, T, comments = true, comment_char = '%')
r,c,n = dat[1,:]
fea = readdlm(rn, '\t', comments = true, comment_char = '%')
bar = readdlm(cn, '\t', comments = true, comment_char = '%')
rf, cf = size(fea)
rb, cb = size(bar)
if feature_col <= cf
fea = fea[:, feature_col]
else
fea = fea[:, 1]
end
if barcode_col <= cb
bar = bar[:, barcode_col]
else
bar = bar[:, 1]
end
fea = reshape(fea, :)
bar = reshape(bar, :)
r == length(fea) || throw(DimensionMismatch("`rn` does not match with `fn`."))
c == length(bar) || throw(DimensionMismatch("`cn` does not match with `fn`."))
mat = spzeros(T, r, c)
mapslices(x-> mat[x[1],x[2]] = x[3], dat[2:end,:], dims = 2)
dat = nothing
return (mat, fea, bar)
end
"""
read_meta(fn, group)
Read in a meta data file with the first row assumed to be the header and the row names assumed to be the profile names (cell barcodes).
Grouping information is specified by the column with the header name of `group`. If `group` is not found, the second column will be used.
It returns the grouped profile names (vector of vectors) and group names.
# Examples
```jldoctest
julia> grp, nam = read_meta("meta.tsv", "Cluster")
julia> length(grp)
12
julia> length.(grp)
12-element Vector{Int64}:
65
512
1057
647
654
326
680
369
1191
46
101
80
julia> length(nam)
12
```
"""
function read_meta(fn::AbstractString, group::AbstractString = "group"; delim::AbstractChar = '\t')
isfile(fn) || throw("File $fn does not exist.")
meta, header = readdlm(fn, delim, header = true)
r, c = size(meta)
c > 1 || throw("Meta file must have at least two columns and the first column should be cell barcodes (or other-type profile names).")
header = header[header .!= ""] # write.table in R will drop the column name for the column that stores the row names
length(header) == c || length(header) == c -1 || throw("Meta header does not match with the content.")
gi = findfirst(==(group), header)
if isnothing(gi) # if `group` is not found in the meta header, assume the second column (the first non-barcode column)
gi = 2
end
gi += c - length(header)
bar = meta[:,1]
grp = meta[:, gi]
nam = unique(grp)
ind = indexin(grp, nam)
return ([bar[ind .== i] for i in 1:length(nam)], nam)
end
"""
splitby(A::AbstractVector, by)
Split a vector into subsets by a function `by` which takes two consective elements in `A`. The vector is splited at where `by` returns `false`.
Returns a vector of subsets (iteration indices to `A`).
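# Examples
An illustrative call (a minimal sketch; note `splitby` returns vectors of iteration indices into `A`, not the values themselves):
```jldoctest
julia> splitby([1, 1, 2, 2, 2, 3], ==)
3-element Vector{Any}:
 [1, 2]
 [3, 4, 5]
 [6]
```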
"""
function splitby(A::AbstractVector, by)
n = length(A)
n >= 2 || error("A must have at least 2 elements.")
res = []
cur = [1]
for i in 2:length(A)
if by(A[i], A[i-1])
push!(cur, i)
else
push!(res, cur)
cur = [i]
end
end
push!(res, cur)
return res
end
#y-axis: Sensitivity, TPR: TP/(TP+FN)
#x-axis: 1-Specificity, FPR: FP/(TN+FP)
"""
roc_kernel(scores, positives)
Calculate the receiver operating curve for `scores` where the positive samples are marked as `true` in `positives`. If the keyword argument `decreasing` is set to `false`, it will assume that lower scores mean a higher probability of being a positive. The return value is a 3*n matrix, where the first column is '1-specificity' (false positive rate), the second column is 'sensitivity' (true positive rate) and the third column is the area under the curve (AUC) up to the current point. However, if the keyword argument `auc_only` is set to `true`, only the AUC is returned, and `x_threshold` can be set to obtain the area under the curve up to the given 'FPR'.
If `scores` is a matrix, each column is assumed to be the score vector. The AUC will be returned these scores with the same ground truth specified by `positives`.
# Examples
```jldoctest
julia> roc_kernel(rand(10), BitVector(rand(Bool, 10)))
5×3 Matrix{Float64}:
0.0 0.0 0.0
0.25 0.166667 0.0208333
0.5 0.166667 0.0625
0.75 0.166667 0.104167
1.0 0.5 0.1875
julia> roc_kernel(rand(10), BitVector(rand(Bool, 10)), decreasing = false)
5×3 Matrix{Float64}:
0.0 0.0 0.0
0.25 0.0 0.0
0.5 0.166667 0.0208333
0.75 0.333333 0.0833333
1.0 1.0 0.25
julia> roc_kernel(rand(10, 5), BitVector(rand(Bool, 10)))
1×5 Matrix{Float64}:
0.190476 0.309524 0.357143 0.642857 0.357143
julia> roc_kernel(rand(10, 5), BitVector(rand(Bool, 10)), x_threshold = 0.5)
1×5 Matrix{Float64}:
0.666667 0.25 0.583333 0.375 0.416667
julia> roc_kernel(rand(10, 5), BitVector(rand(Bool, 10)), x_threshold = 0.5, verbose = true)
x= 1.0 y= 0.25
x= 1.0 y= 0.875
x= 1.0 y= 1.0
x= 1.0 y= 0.75
x= 1.0 y= 1.0
1×5 Matrix{Float64}:
0.125 0.46875 0.625 0.5 0.3125
```
# Arguments
- `scores::AbstractVector` or `AbstractMatrix`: the scores vector or matrix.
- `positives::BitVector`: the ground-truth vector.
- `decreasing::Bool = true`: score's direction.
- `auc_only::Bool = false`: whether to run the calculation in the AUC-only mode.
- `verbose::Bool = false`: the verbosity of output.
"""
function roc_kernel(
scores::AbstractVector, # Scores for each sample
positives::BitVector; # Whether a sample is a positive, (1 true, 0 false)
decreasing::Bool = true, # By default, higher scores mean more likely to be a positive; set it to `false` if lower scores mean higher probability
auc_only::Bool = false, # If `true`, return `auc` only
x_threshold::Number = 1, # return the Area under the curve to this FPR
verbose::Bool = false # whether output extra information
)
n = length(scores)
n == length(positives) || throw(DimensionMismatch("'scores' and 'positives' do not have equal number of rows."))
#tp = cumsum(positives[o]) # True positives
# TP + FP: = 1:n
#fp = (1:n) .- tp # False positives
# TP + FN = m
#/ fn = m .- fp
# FN + TN = n .- (1:n)
#/ tn = (n - m) .- tp
m = sum(positives) # total number of true positives
o = sortperm(scores, rev = decreasing) #Tied-rank issue.
scores_o = scores[o]
pos_o = positives[o]
scores_i = splitby(scores_o, ==)
tp = map(i->sum(pos_o[i]), scores_i)
fp = length.(scores_i) .- tp
tp = cumsum(tp)
fp = cumsum(fp)
y = vcat([0], (1/m) .* tp)
x = vcat([0], (1/(n-m)) .* fp)
rev_x = reverse(x)
ind = reverse(vcat([true], (rev_x[1:end-1] .!= rev_x[2:end])))
x = x[ind]
y = y[ind]
a = vcat([0], 0.5 * (diff(x) .* (y[1:end-1] .+ y[2:end]))) # Area under each interval
auc = cumsum(a)
if auc_only
ind = findfirst(x -> x >= x_threshold, auc)
if isnothing(ind)
verbose && println("x=\t", x[end], "\ty=\t", y[end])
return auc[end]
else
verbose && println("x=\t", x[ind], "\ty=\t", y[ind])
return auc[ind]
end
else
return hcat(x, y, auc)
end
end
# Matrix model for scores
function roc_kernel(
scores::AbstractMatrix,
positives::BitVector;
decreasing::Bool = true,
auc_only::Bool = true,# must be true
x_threshold::Number = 1, # return the Area under the curve to this FPR
verbose::Bool = false # whether output extra information
)
r, c = size(scores)
r == length(positives) || throw(DimensionMismatch("'scores' and 'positives' do not have equal number of rows."))
auc_only || throw("If `scores` is a matrix, it can only run in the `auc_only` mode.")
mapslices(x -> roc_kernel(x, positives, decreasing = decreasing, auc_only = true, x_threshold = x_threshold, verbose = verbose), scores, dims = 1)
end
# AUCell.jl
AUCell.jl is an algorithm for cell reclassification based on AUC values by feature genes in the pathway.
## 1 Used in the Julia language
### 1.1 Installation
The algorithm is implemented in Julia. Version 1.7 or later is recommended. The simplest way to install it is via the `Pkg` facility in Julia.
```julia
using Pkg
Pkg.add("AUCell")
```
### 1.2 Examples
#### 1.2.1 Quick Start
Run a test job with the input files distributed with the package.
```julia
julia> using AUCell
# Use the default values for the following other parameters. If you need to modify the parameters, add them directly.
julia> result = pathway_AUC_main(use_testdata="yes")
```
The analysis results and a few plots will be generated and saved in the current working directory. They are also returned by the `pathway_AUC_main` function and can be captured by assigning the returned value to a variable, e.g., `result` in the above example.
The return value is a matrix whose rows are the analyzed pathways (gene sets). In `aucell` mode the columns hold the per-sample AUC values; in `reAUCluster` mode they hold the two reclustered barcode groups and the t-test statistics.
```julia
julia> result
1.595452 seconds (8.44 M allocations: 279.644 MiB, 4.83% gc time, 77.52% compilation time)
[ Info: INFO: The size of expression profile was (36602, 8).
1.945127 seconds (4.95 M allocations: 260.557 MiB, 11.17% gc time, 96.92% compilation time)
[ Info: INFO: The filtered of expression profile size was (7549, 8).
0.000401 seconds (27 allocations: 34.641 KiB)
[ Info: INFO: There are 1 pathways to be analyzed.
0.660084 seconds (1.75 M allocations: 87.597 MiB, 3.11% gc time, 99.78% compilation time)
[ Info: INFO: According to the meta information, there are 2 groups of data and each group will be analyzed with the rest of the sample.
2.731819 seconds (6.61 M allocations: 365.662 MiB, 3.77% gc time, 94.64% compilation time)
2×17 Matrix{Any}:
"GeneSet" "group1" "group1" "group1" "group1" "group1" … "group2" "group2" "group2" "group2" "group2" "group2" "group2" "group2"
"HALLMARK_TNFA_SIGNALING_VIA_NFKB" 0.506962 0.500821 0.515332 0.529347 0.453294 0.506962 0.500821 0.515332 0.529347 0.453294 0.512858 0.482078 0.440029
```
#### 1.2.2 Run your own AUCell analysis
You need to prepare three input files before the analysis: a pathway features gene file, an expression profile file, and a metadata file.
##### 1.2.2.1 pathway features gene file
1. `read_gmt`: Read in a GMT file (MSigDB gene set format, `.gmt`), where `fn` is the file path. (See [fn_feature.gmt](https://github.com/yanjer/AUCell/blob/master/HALLMARK_pathway/h_all_v2023_1_Hs_symbols.gmt))
2. `read_gsf`: Read in a general gene set file, where `fn` is the file path and the fields are separated by the `delim` character (default: white space). Each row represents a gene set: the first column is the name of the set and the rest are the genes in the set. `.csv`, `.txt` and `.tsv` are supported. (See `.csv`: [fn_feature.csv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/fn_feature.csv) or `.txt`: [fn_feature.txt](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/fn_feature.txt) or `.tsv`: [fn_feature.tsv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/fn_feature.tsv))
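For illustration, a GMT file stores one tab-separated gene set per line: the set name, a description field, then the member genes. Below is a minimal parsing sketch in plain Julia; the set name, description, and genes are made up for the example, and the actual `read_gmt` may differ in details.

```julia
# One line of a GMT file: name <TAB> description <TAB> gene1 <TAB> gene2 ...
line = "HALLMARK_EXAMPLE\tdesc\tJUNB\tCXCL2\tATF3"

fields  = split(line, '\t')
setname = fields[1]
genes   = fields[3:end]   # field 2 is the description and is skipped
```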
##### 1.2.2.2 expression profile file
1. `read_mtx`: Read in the common 10X single-cell RNA expression file in the MTX format (unzipped). (See `fn`: [matrix.mtx](https://github.com/yanjer/AUCell/blob/master/test/matrix.mtx), `rn`: [features.tsv](https://github.com/yanjer/AUCell/blob/master/test/features.tsv), `cn`: [barcodes.tsv](https://github.com/yanjer/AUCell/blob/master/test/barcodes.tsv))
2. `read_expr_matrix`: Read in an expression matrix stored in `fn` where its row names are stored in `rn` and column names are stored in `cn`. (See `fn`: [matrix.csv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/matrix.csv) (`.csv`) or [matrix.txt](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/matrix.txt) (`.txt`) or [matrix.tsv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/matrix.tsv) (`.tsv`); `rn`: [features.tsv](https://github.com/yanjer/AUCell/blob/master/test/features.tsv), `cn`: [barcodes.tsv](https://github.com/yanjer/AUCell/blob/master/test/barcodes.tsv))
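For reference, the MTX (MatrixMarket coordinate) format consumed by `read_mtx` lists the matrix dimensions followed by one `row column value` triplet per non-zero entry. A toy sketch of how such a file maps to a matrix (the numbers are made up, and the actual reader keeps the data sparse):

```julia
mtx = """
%%MatrixMarket matrix coordinate integer general
3 2 3
1 1 5
2 2 1
3 1 2
"""

lines = split(strip(mtx), '\n')
nrow, ncol, nnz = parse.(Int, split(lines[2]))   # header: dimensions and non-zero count
A = zeros(Int, nrow, ncol)
for l in lines[3:end]
    i, j, v = parse.(Int, split(l))
    A[i, j] = v
end
```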
##### 1.2.2.3 metadata file
`read_meta`: Read in a meta data file with the first row assumed to be the header and the row names assumed to be the profile names (cell barcodes). Grouping information is specified by the column with the header name of `group`. If `group` is not found, the second column will be used. It returns the grouped profile names (vector of vectors) and group names. (See [fn_meta.txt](https://github.com/yanjer/AUCell/blob/master/test/fn_meta.txt))
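As an illustration, here is a minimal metadata layout and the grouping logic it implies. The barcodes are made up, and `read_meta` itself handles the actual parsing; this sketch only shows the expected structure (header row, barcode column, `group` column).

```julia
meta = """
barcode\tgroup
AAACCCA-1\tgroup1
AAACCCG-1\tgroup1
AAACGGT-1\tgroup2
"""

rows       = split.(split(strip(meta), '\n')[2:end], '\t')   # skip the header row
groupnames = unique(last.(rows))
grouped    = [[first(r) for r in rows if last(r) == g] for g in groupnames]
```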
```julia
julia> using AUCell
# Use the default values for the following other parameters. If you want to modify the parameters, add them directly.
julia> pathway_AUC_main("matrix.mtx",
"features.tsv",
"barcodes.tsv",
"fn_feature.gmt",
"fn_meta.txt")
```
Other parameters can be set by passing the value to the corresponding keyword.
```julia
pathway_AUC_main("matrix.mtx",
"features.tsv",
"barcodes.tsv",
"fn_feature.gmt",
"fn_meta.txt";
fn_meta_delim = '\t',
fn_meta_group = "group",
file_format_expr = "read_mtx",
T = Int32,
feature_col = 2,
barcode_col = 1,
rem_delim = ' ',
feature_threshold = 30,
cell_threshold = 200,
file_format_feature = "read_gmt",
fn_feature_delim = ' ',
use_HALLMARK_pathway = "no",
mode = "AUCell",
              ncell_pseudo = 0,
auc_x_threshold = 1.0,
remove_zeros = true,
use_testdata = "no",
work_dir = "./")
```
#### 1.2.3 Pseudobulk method
For scRNA-seq data, one can carry out a pseudobulk analysis. Rather than using the original single-cell profiles, pseudobulk profiles can be generated and used for the AUC analysis. In this method, a random subset of cells from a group is aggregated into a pseudo-bulk profile.
The pseudobulk method can be turned on by setting `ncell_pseudo > 0`.
```julia
julia> pathway_AUC_main("matrix.mtx",
"features.tsv",
"barcodes.tsv",
"fn_feature.gmt",
"fn_meta.txt";
ncell_pseudo = 10)
```
`ncell_pseudo` is the number of cells combined into each pseudobulk profile within a group. By default (`ncell_pseudo = 0`) the pseudobulk method is not used; any positive value gives the number of cells merged into one sample.
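The idea can be sketched in a few lines of plain Julia. This is an illustration only, assuming summation as the aggregation step; the package's internal aggregation may differ.

```julia
using Random

# Combine the cells (columns) of one group into pseudo-bulk profiles:
# shuffle the cells, cut them into chunks of `ncell_pseudo`, and sum
# each chunk into a single profile.
function pseudobulk(X::AbstractMatrix, ncell_pseudo::Integer; rng = Random.default_rng())
    cells  = shuffle(rng, 1:size(X, 2))
    chunks = Iterators.partition(cells, ncell_pseudo)
    hcat((sum(X[:, c], dims = 2) for c in chunks)...)
end

X = rand(5, 20)          # 5 genes × 20 cells in one group
P = pseudobulk(X, 10)    # 5 × 2 matrix: two pseudo-bulk profiles
```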
### 1.3 Optional Parameters
Below lists the optional keyword parameters and their default values.
| Parameter | Parameter types | Default value | Parameters to describe |
| -------------------- | --------------- | ---------------- | ------------------------------------------------------------ |
| fn_expr | AbstractString | "matrix.mtx" | MTX file path. (required). |
| rn_expr | AbstractString | "features.tsv" | features file path. (required) |
| cn_expr | AbstractString | "barcodes.tsv" | barcodes file path. (required) |
| fn_feature | AbstractString | "fn_feature.gmt" | Pathway feature gene set file path. (required) |
| fn_meta | AbstractString | "fn_meta.txt" | Grouping information file path. Read in a meta data file with the first row assumed to be the header and the row names assumed to be the profile names (cell barcodes). |
| fn_meta_delim | AbstractChar | '\t' | Delimiter of the metadata file data. |
| fn_meta_group | AbstractString | "group" | Grouping information is specified by the column with the header name of `group`. If `group` is not found, the second column will be used. |
| file_format_expr | AbstractString | "read_mtx" | There are two input modes "read_mtx" and "read_expr_matrix" for the expression profile file format. |
| T                    | Type            | Int32            | Storage type of the expression matrix values. |
| feature_col          | Int             | 2                | Column of the features file containing the feature (gene) names. |
| barcode_col          | Int             | 1                | Column of the barcodes file containing the cell barcodes. |
| rem_delim            | AbstractChar    | ' '              | File delimiter used when `file_format_expr` is "read_expr_matrix". |
| feature_threshold | Int | 30 | Include features (genes) detected in at least this many cells. |
| cell_threshold | Int | 200 | Include profiles (cells) where at least this many features are detected. |
| file_format_feature | AbstractString | "read_gmt" | There are two input modes "read_gmt" and "read_gsf" for the file format of the features contained in the pathways. |
| fn_feature_delim | AbstractChar | ' ' | Delimiter of the pathway features file data. |
| use_HALLMARK_pathway | AbstractString | "no" | Whether to use the built-in HALLMARK pathways. |
| mode                 | AbstractString  | "AUCell"         | "AUCell" computes the pathway AUC values from the feature genes for the two groups; "pathway_recluster" reclusters the cells into subgroups based on pathway activation. |
| ncell_pseudo         | Int             | 0                | Number of cells combined into each pseudobulk profile within a group. By default (`ncell_pseudo = 0`) the pseudobulk method is not used; any positive value gives the number of cells merged into one sample. |
| auc_x_threshold      | Float64         | 1.0              | Threshold for the X-axis (1-specificity) in the AUC calculation; the area is computed over 0 to `auc_x_threshold`. |
| remove_zeros | Bool | true | Whether to remove all cells with zero gene expression values. |
| work_dir | AbstractString | "./" | Working Directory. |
| use_testdata | AbstractString | "no" | Whether to use the default provided test data for analysis, yes or no. |
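To make the `mode = "AUCell"` and `auc_x_threshold` parameters concrete, here is a rough sketch of the per-cell AUC idea. This is an illustration only, not the package's exact implementation: genes are ranked by expression within a cell, and the area under the gene-set "recovery curve" is computed over the top-ranked fraction of genes given by `auc_x_threshold` (1.0 means the whole ranking).

```julia
function cell_auc(expr::AbstractVector{<:Real}, inset::AbstractVector{Bool};
                  auc_x_threshold = 1.0)
    order   = sortperm(expr; rev = true)            # highest expression first
    maxrank = round(Int, auc_x_threshold * length(expr))
    hits    = cumsum(inset[order][1:maxrank])       # recovery curve
    maxarea = sum(min.(1:maxrank, hits[end]))       # area of a perfect curve
    sum(hits) / maxarea                             # normalized AUC in [0, 1]
end

expr  = [5.0, 0.1, 4.0, 0.2, 3.0]
inset = [true, false, true, false, true]            # gene-set membership
cell_auc(expr, inset)    # → 1.0: all set genes are ranked on top
```

A cell whose gene-set genes sit low in the ranking gets an AUC closer to 0, which is what allows reclassifying cells by pathway activity.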
### 1.4 Example output file
#### 1.4.1 result
- The file contains the pathway AUC values for each sample in each group: rows are pathways and columns are samples. (See [aucell_result.tsv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata_output/aucell_result.tsv))
#### 1.4.2 log file
- [AUCell_testdata.log](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata_output/AUCell_testdata.log)
## 2 Used in the R language
### 2.1 Installation
##### 2.1.1 Install `JuliaCall` just like any other R package
```R
install.packages("JuliaCall")
```
##### 2.1.2 To use it you must have a working installation of Julia, which `install_julia()` will automatically install and set up for use with `JuliaCall`
```R
library(JuliaCall)
install_julia()
```
##### 2.1.3 Alternatively, if Julia is already installed, set it up manually
```R
library(JuliaCall)
julia <- julia_setup()
```
##### 2.1.4 Download AUCell
```R
julia_install_package_if_needed("AUCell")
```
### 2.2 Examples
#### 2.2.1 Quick Start
Run a test job with the input files distributed with the package.
```R
julia_library("AUCell")
result <- julia_do.call("pathway_AUC_main",list(use_testdata="yes"),need_return="Julia",show_value=FALSE)
```
The analysis results and a few plots will be generated and saved in the current work directory. They are also returned by the `pathway_AUC_main` function and can be captured by assigning the returned value to a variable, e.g., `result` in the above example.
The returned value is a matrix: the first row gives the group label of each sample column, and each subsequent row gives the AUC values of one gene set (pathway) across the samples.
```R
> result
1.595452 seconds (8.44 M allocations: 279.644 MiB, 4.83% gc time, 77.52% compilation time)
[ Info: INFO: The size of expression profile was (36602, 8).
1.945127 seconds (4.95 M allocations: 260.557 MiB, 11.17% gc time, 96.92% compilation time)
[ Info: INFO: The filtered of expression profile size was (7549, 8).
0.000401 seconds (27 allocations: 34.641 KiB)
[ Info: INFO: There are 1 pathways to be analyzed.
0.660084 seconds (1.75 M allocations: 87.597 MiB, 3.11% gc time, 99.78% compilation time)
[ Info: INFO: According to the meta information, there are 2 groups of data and each group will be analyzed with the rest of the sample.
2.731819 seconds (6.61 M allocations: 365.662 MiB, 3.77% gc time, 94.64% compilation time)
2×17 Matrix{Any}:
"GeneSet" "group1" "group1" "group1" "group1" "group1" … "group2" "group2" "group2" "group2" "group2" "group2" "group2" "group2"
"HALLMARK_TNFA_SIGNALING_VIA_NFKB" 0.506962 0.500821 0.515332 0.529347 0.453294 0.506962 0.500821 0.515332 0.529347 0.453294 0.512858 0.482078 0.440029
```
#### 2.2.2 Run your own AUCell analysis
You need to prepare three input files before the analysis: a pathway features gene file, an expression profile file, and a metadata file.
##### 2.2.2.1 pathway features gene file
1. `read_gmt`: Read in a GMT file (MSigDB gene set format, `.gmt`), where `fn` is the file path. (See [fn_feature.gmt](https://github.com/yanjer/AUCell/blob/master/HALLMARK_pathway/h_all_v2023_1_Hs_symbols.gmt))
2. `read_gsf`: Read in a general gene set file, where `fn` is the file path and the fields are separated by the `delim` character (default: white space). Each row represents a gene set: the first column is the name of the set and the rest are the genes in the set. `.csv`, `.txt` and `.tsv` are supported. (See `.csv`: [fn_feature.csv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/fn_feature.csv) or `.txt`: [fn_feature.txt](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/fn_feature.txt) or `.tsv`: [fn_feature.tsv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/fn_feature.tsv))
##### 2.2.2.2 expression profile file
1. `read_mtx`: Read in the common 10X single-cell RNA expression file in the MTX format (unzipped). (See `fn`: [matrix.mtx](https://github.com/yanjer/AUCell/blob/master/test/matrix.mtx), `rn`: [features.tsv](https://github.com/yanjer/AUCell/blob/master/test/features.tsv), `cn`: [barcodes.tsv](https://github.com/yanjer/AUCell/blob/master/test/barcodes.tsv))
2. `read_expr_matrix`: Read in an expression matrix stored in `fn` where its row names are stored in `rn` and column names are stored in `cn`. (See `fn`: [matrix.csv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/matrix.csv) (`.csv`) or [matrix.txt](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/matrix.txt) (`.txt`) or [matrix.tsv](https://github.com/yanjer/testdata-output/blob/master/AUCell_testdata/matrix.tsv) (`.tsv`); `rn`: [features.tsv](https://github.com/yanjer/AUCell/blob/master/test/features.tsv), `cn`: [barcodes.tsv](https://github.com/yanjer/AUCell/blob/master/test/barcodes.tsv))
##### 2.2.2.3 metadata file
`read_meta`: Read in a meta data file with the first row assumed to be the header and the row names assumed to be the profile names (cell barcodes). Grouping information is specified by the column with the header name of `group`. If `group` is not found, the second column will be used. It returns the grouped profile names (vector of vectors) and group names. (See [fn_meta.txt](https://github.com/yanjer/AUCell/blob/master/test/fn_meta.txt))
Once the files are ready, you can carry out the AUCell analysis with the default settings as follows.
```R
julia_library("AUCell")
# Use the default values for the following other parameters. If you want to modify the parameters, add them directly.
julia_do.call("reoa",list("matrix.mtx",
"features.tsv",
"barcodes.tsv",
"fn_feature.gmt",
"fn_meta.txt"),need_return="Julia",show_value=FALSE)
```
Other parameters can be set by passing the value to the corresponding keyword.
```R
julia_do.call("reoa",list("matrix.mtx",
"features.tsv",
"barcodes.tsv",
"fn_feature.gmt",
"fn_meta.txt";
fn_meta_delim = '\t',
fn_meta_group = "group",
file_format_expr = "read_mtx",
T = Int32,
feature_col = 2,
barcode_col = 1,
rem_delim = ' ',
feature_threshold = 30,
cell_threshold = 200,
file_format_feature = "read_gmt",
fn_feature_delim = ' ',
use_HALLMARK_pathway = "no",
mode = "AUCell",
                          ncell_pseudo = 0,
auc_x_threshold = 1.0,
                          remove_zeros = TRUE,
use_testdata = "no",
work_dir = "./"),need_return="Julia",show_value=FALSE)
```
#### 2.2.3 Pseudobulk method
For scRNA-seq data, one can carry out a pseudobulk analysis. Rather than using the original single-cell profiles, pseudobulk profiles can be generated and used for the AUC analysis. In this method, a random subset of cells from a group is aggregated into a pseudo-bulk profile.
The pseudobulk method can be turned on by setting `ncell_pseudo > 0`.
```R
julia_do.call("reoa",list("matrix.mtx",
"features.tsv",
"barcodes.tsv",
"fn_feature.gmt",
"fn_meta.txt";
ncell_pseudo = 10),need_return="Julia",show_value=FALSE)
```
`ncell_pseudo` is the number of cells combined into each pseudobulk profile within a group. By default (`ncell_pseudo = 0`) the pseudobulk method is not used; any positive value gives the number of cells merged into one sample.
### 2.3 Optional Parameters
**See** 1.3 Optional Parameters.
### 2.4 Example output file
**See** 1.4 Example output file.
| AUCell | https://github.com/yanjer/AUCell.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | code | 477 | push!(LOAD_PATH, "../src/")
using Documenter
using SpeechFeatures
DocMeta.setdocmeta!(SpeechFeatures, :DocTestSetup,
:(using SpeechFeatures), recursive = true)
makedocs(
sitename="SpeechFeatures",
format = Documenter.HTML(prettyurls = get(ENV, "CI", nothing) == "true"),
pages = [
"Home" => "index.md",
"Extracting Features" => "feaextract.md"
]
)
deploydocs(
repo = "github.com/lucasondel/SpeechFeatures.jl.git",
)
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | code | 43992 | ### A Pluto.jl notebook ###
# v0.16.1
using Markdown
using InteractiveUtils
# ╔═╡ 75ae3354-2aaa-11ec-1805-d1efd04acf08
begin
	using LinearAlgebra
	using Plots
	using PlutoUI
	using Statistics
	using WAV
	using SpeechFeatures
	using FFTW
	using PaddedViews
end
# ╔═╡ f87589e3-c5d7-41b5-b376-bcf9eec006d1
md"""
# MFCC features explained
*[Lucas Ondel](https://lucasondel.github.io/), October 2021*
In this notebook, we show step-by-step how to compute the Mel-Frequency Cepstral Coefficient (MFCC) features.
"""
# ╔═╡ d9026f53-756d-4862-a258-f9663a9a76a2
md"""
We will use mostly two packages: [WAV.jl](https://github.com/dancasimiro/WAV.jl) to load the WAV data and [SpeechFeatures.jl](https://github.com/lucasondel/SpeechFeatures.jl) which provides utility function to extract the features.
"""
# ╔═╡ 319b69f9-6c9d-4d22-9896-055800cf5de8
TableOfContents()
# ╔═╡ 844d4433-bc74-472b-9723-d4136bf56f0f
md"""
## Loading the data
We will work with a sample from the TIMIT corpus freely available on the LDC website. First, we download it in the directory of this notebook.
"""
# ╔═╡ 86a58676-7f23-4e45-8ffb-0413e00e3237
wavpath = download("https://catalog.ldc.upenn.edu/desc/addenda/LDC93S1.wav",
joinpath(@__DIR__, "sample.wav"))
# ╔═╡ f6647baa-e24a-4c67-9c1c-ae95cd9239e4
md"""
Now, we read it using the `wavread` function. This function returns the channel matrix of size $S \times C$ and the sampling frequency.
"""
# ╔═╡ 4cd9e50b-6e12-48e0-812d-00af1598b32c
channels, fs = wavread(wavpath; format="double")
# ╔═╡ f2227028-3926-4864-9330-33cacc6349be
md"""
Each channel corresponds to the recording of one microphone. Concretely, if you have a mono recording you will have one channel, if you have stereo recording you will have 2 channels, etc.
Here, we only take the first channel (the data is mono anyway).
"""
# ╔═╡ ab6e2ce4-5941-4441-ae1d-7417a9b2b84e
x = channels[:, 1]
# ╔═╡ 786d833c-4a58-48d3-9e6e-b7869fd02a2e
md"""
Now we can plot the waveform.
"""
# ╔═╡ 8d116895-703f-4fd5-a3a9-aa8925ef7461
plot((1:length(x)) ./ fs, x; xlabel="Time (s)", legend=false)
# ╔═╡ 8daea702-d679-4ef0-96d5-230f597889a6
md"""
## Dithering
To avoid having frequency components with 0 energy, we add a tiny bit of white noise to the signal.
"""
# ╔═╡ db90b23f-d363-432d-a2e2-5772bf1657ba
dithering = 1e-12
# ╔═╡ 0a9c2db4-bd6e-42e5-874f-28f75b5385c5
x .+= randn(length(x)) .* dithering
# ╔═╡ 0a2780df-8fee-4b27-a944-3e0c7f2aa053
md"""
## DC removal
In general, most signals will be centered around zero. However, if this is not the case, this can introduce an undesired bias. To avoid this, we simply remove the Direct Component (DC) as a first step.
"""
# ╔═╡ da662210-d760-4989-b6c3-99c58395514f
x .-= mean(x)
# ╔═╡ 8bbd7a37-c714-4f64-81d0-48a18717336b
md"""
## Getting frames
The MFCC features are based on short-term spectral representation of the signal. Therefore, we need to divide the signal into overlapping frames that will be later transformed to the spectral domain.
The frame extraction has 2 parameters:
* the frame duration
* the step, i.e. the time between the beginning of two adjacent frames
"""
# ╔═╡ 70ef4159-f09a-4e2d-a266-c86972a6a611
frameduration = 0.025 # in second
# ╔═╡ ce94d7c3-5814-4805-a5c5-bf6e56c412ff
framestep = 0.01 # in second
# ╔═╡ b457c84c-50aa-43aa-84d6-d38cff22883b
md"""
To get the size of a frame in terms of number of samples, we multiply the frame duration and the sampling frequency.
"""
# ╔═╡ 2623d5d5-ba06-4247-8929-5d98d8c65c89
framesize = Int64(frameduration * fs)
# ╔═╡ 65e7fb61-4fb5-4826-8487-2c613b782773
X = hcat(SpeechFeatures.eachframe(x; srate=fs, frameduration, framestep)...)
# ╔═╡ 045b825e-47ed-462f-912d-3954812651a8
md"""
## Pre-emphasis
The purpose of this step is to increase the dynamic range of the high-frequency components. The pre-emphasis filter is defined as:
```math
y[n] = x[n] - k \cdot x[n-1],
```
where $k \in [0, 1]$. In general, we set $k = 0.97$.
"""
# ╔═╡ de44637f-2f24-4a5a-b1f3-1f5dd90f85af
function preemphasis(x; k = 0.97)
y = similar(x)
y[1] = x[1]
prev = x[1]
for i in 2:length(x)
y[i] = x[i] - k*prev
prev = x[i]
end
y
end
# ╔═╡ 8f876646-f9bf-489a-8002-607d38eee4e9
prmX = hcat(map(preemphasis, eachcol(X))...)
# ╔═╡ d49fd625-0415-44b4-94a9-94d2780aa0c3
md"""
## Windowing
Each frame will be multiplied by a windowing function. Here we compare different types of windows.
"""
# ╔═╡ dfa1065d-29ad-443e-930c-33ae740652d7
# The exponent is used in the Kaldi features extraction.
# The idea is to get a Hamming-like window which goes to 0
# at the edge.
hann = SpeechFeatures.HannWindow(framesize) .^ 0.85
# ╔═╡ b1b54b9e-bca0-4e65-901d-6cf690331e2c
hamming = SpeechFeatures.HammingWindow(framesize)
# ╔═╡ 9fd33feb-e0c2-45fc-80a2-baa9fe9bbcd3
rectangular = SpeechFeatures.RectangularWindow(framesize)
# ╔═╡ 717bf350-954e-4e33-95ef-063c89fe90ae
begin
plot((1:framesize) ./ fs, hann; linewidth=2, yrange=(0,1), label = "Hann")
plot!((1:framesize) ./ fs, hamming; linewidth=2, yrange=(0,1), label = "Hamming")
plot!((1:framesize) ./ fs, rectangular; linewidth=2, yrange=(0,1), label = "Rectangular")
end
# ╔═╡ 19148014-f27e-4821-946e-fb68345a7641
md"""
Change the line below to select the window.
"""
# ╔═╡ f283f94f-993a-4156-b606-8014aae341ca
window = hann
# ╔═╡ 95b9d153-4934-45d8-b9f3-138d93757bfb
md"""
Finally, we multiply (in-place) the window on each frame.
"""
# ╔═╡ f4f33068-88f2-4b23-bb7b-47abc9e34bac
wX = hcat(map(x -> x .* window, eachcol(prmX))...)
# ╔═╡ e2b3d74c-9199-4c03-8405-fb19f171fd05
md"""
## Short-term spectrum
Now, we compute the Fourier transform of each frame.
For efficiency reasons, it is common to compute the Fast Fourier Transform (FFT) on vectors whose length is a power of two. For this, we simply take the first power of two larger than the current frame size.
"""
# ╔═╡ d4cb69e2-fd46-4bc6-ae1b-8e041e015f76
fftlen = Int64(2^ceil(log2(size(wX, 1))))
# ╔═╡ d18010a6-6ac5-493f-92fc-590bf6bd6fe3
md"""
We pad the frames with zeros at the end to match the size of the FFT.
"""
# ╔═╡ 3ae4936f-faa9-45ac-90cd-c2a1bc782550
pX = PaddedView(0, wX, (fftlen, size(wX, 2)))[1:end,:]
# ╔═╡ 831accf5-fa88-492c-9a9e-6e7e58a6ce91
md"""
Get the magnitude spectrum of each frame.
"""
# ╔═╡ 68ef4c2f-eb3c-458f-96ad-a301754bc469
S = abs.(rfft(pX, 1)[1:end-1,:])
# ╔═╡ 013a0118-8181-461e-bc8f-fb6479787383
heatmap((1:size(S,2)) ./ 100, (fs/2) .* (0:size(S,1)-1) ./ size(S,1), S;
xlabel="time (s)", ylabel="frequency (Hz)")
# ╔═╡ 6c94c82f-837b-4bcc-8db8-79ad8a0382d4
md"""
## Filter bank
The short-term spectrum we have extracted is useful but it is not very faithful to what humans actually perceive: our spectral resolution is much lower and it maps non-linearly to the frequency range.
The mel-scale is an approximation of the human frequency-perception. It is given by:
```math
m = 1127 \ln(1 + \frac{f}{700})
```
"""
# ╔═╡ 5de5c24d-5407-4001-96a1-21094719c65f
plot(0.1:0.1:8000, SpeechFeatures.freq2mel.(0.1:0.1:8000);
xlabel="Hertz", ylabel="Mel", legend=false)
# ╔═╡ 6398bf0b-295e-4c6d-a9fa-0df8c1bd2807
md"""
We create a set of 26 filters whose centers are equally spaced on the mel-scale.
"""
# ╔═╡ 20c2333c-f368-4077-86ef-794e849adb0a
fbank = SpeechFeatures.FilterBank(26; srate=fs, fftlen, lofreq=20, hifreq=7600)
# ╔═╡ 406bce61-409c-4d3e-8d50-930f4b48387b
plot((fs/2) .* (1:size(fbank,2)) ./ size(fbank,2), fbank';
size=(800, 400), legend=false, xlabel="frequency (Hz)")
# ╔═╡ df2ee608-be7f-44d6-bab8-41a67fbe9e48
md"""
Applying the filter bank in the spectral domain amounts to multiplying the frame matrix by the filter bank matrix.
"""
# ╔═╡ 39b9c15e-8459-45a3-b4de-9bada6203580
fS = fbank * S
# ╔═╡ f19ba4e7-8625-43b9-a989-95e3f7ab1825
heatmap((1:size(fS,2)) ./ 100, 1:size(fS,1), fS;
xlabel="time (s)")
# ╔═╡ 664a2a9b-d12a-4230-b0bb-c4eb32dbd253
md"""
Furthermore, we move the spectrum to the log-domain to reduce the dynamic range.
"""
# ╔═╡ 07c8b5e6-0f95-49ea-8fb8-1aa24c6bc11c
lS = log.(fS)
# ╔═╡ 82550777-784f-4c97-86e2-1e0bad53f9ae
heatmap((1:size(fS,2)) ./ 100, 1:size(fS,1), lS;
xlabel="time (s)")
# ╔═╡ b0f01b5a-40a8-4bc1-bab6-ad9ea1daff73
md"""
## DCT
We can decorrelate and reduce the dimension of the features by applying a Discrete Cosine Transform (DCT). By doing so, our features are now in the "cepstral" domain.
"""
# ╔═╡ ce9a8367-48d0-4947-8499-50b674d763ea
nceps = 13
# ╔═╡ d2c573f1-58b2-4104-b619-56cfbb522063
C = dct(lS, 1)[1:nceps,:]
# ╔═╡ 057c98a6-9878-4d51-aede-f77603af7e16
heatmap((1:size(C,2)) ./ 100, 1:size(C,1), C;
xlabel="time (s)")
# ╔═╡ 09528ac5-ffda-4d0a-b7ce-522722593644
md"""
## Liftering
Now we "lifter" (i.e. filtering in the cepstral domain) to even the dynamic ranges across cepstral coefficients.
"""
# ╔═╡ 9c9d7293-68c7-4d66-bea0-1743019bf9dc
function makelifter(N, L)
t = Vector(1:N)
1 .+ L/2 * sin.(π * t / L)
end
# ╔═╡ f97d744f-be53-42a4-8800-e83d4440b0e6
lifter = makelifter(size(C,1), 22)
# ╔═╡ 857e1fa9-6997-4c96-90fa-ae0fbb9e8cc2
plot(lifter, legend=false)
# ╔═╡ 83af226d-f60f-461e-8c28-835160d5c270
lC = hcat(map(x -> x .* lifter, eachcol(C))...)
# ╔═╡ b8180e1c-d698-44ec-9372-a7d8f133b3f1
heatmap((1:size(lC,2)) ./ 100, 1:size(lC,1), lC;
xlabel="time (s)")
# ╔═╡ 0582ef6f-dcd2-42c2-a1bd-8aac011cf166
md"""
## Dynamic features
Finally, we append the first and second derivatives of the features. The derivatives are
calculated as:
```math
\dot{x} \approx \frac{\sum_{k=1}^K k \cdot (x[n+k] - x[n-k]) }{2\sum_{k=1}^K k^2}
```
"""
# ╔═╡ 09ede491-cb56-4327-b2e0-6e10b3a5483d
Δ = SpeechFeatures.delta(C, 2)
# ╔═╡ 9c0fd83e-9217-4516-a4e6-9566a7e78b31
ΔΔ = SpeechFeatures.delta(Δ, 2)
# ╔═╡ af564d77-dc11-4125-bde3-1f07c4521937
features = vcat(C, Δ, ΔΔ)
# ╔═╡ 00000000-0000-0000-0000-000000000001
PLUTO_PROJECT_TOML_CONTENTS = """
[deps]
FFTW = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
PaddedViews = "5432bcbf-9aad-5242-b902-cca2824c8663"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
PlutoUI = "7f904dfe-b85e-4ff6-b463-dae2292396a8"
SpeechFeatures = "6f3487c4-5ca2-4050-bfeb-2cf56df92307"
WAV = "8149f6b0-98f6-5db9-b78f-408fbbb8ef88"
[compat]
FFTW = "~1.4.5"
PaddedViews = "~0.5.10"
Plots = "~1.22.5"
PlutoUI = "~0.7.16"
SpeechFeatures = "~0.3.0"
WAV = "~1.1.1"
"""
# ╔═╡ 00000000-0000-0000-0000-000000000002
PLUTO_MANIFEST_TOML_CONTENTS = """
# This file is machine-generated - editing it directly is not advised
[[AbstractFFTs]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "485ee0867925449198280d4af84bdb46a2a404d0"
uuid = "621f4979-c628-5d54-868e-fcf4e3e8185c"
version = "1.0.1"
[[Adapt]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "84918055d15b3114ede17ac6a7182f68870c16f7"
uuid = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
version = "3.3.1"
[[ArgTools]]
uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f"
[[Artifacts]]
uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
[[Base64]]
uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
[[Bzip2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "19a35467a82e236ff51bc17a3a44b69ef35185a2"
uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
version = "1.0.8+0"
[[Cairo_jll]]
deps = ["Artifacts", "Bzip2_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "JLLWrappers", "LZO_jll", "Libdl", "Pixman_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "f2202b55d816427cd385a9a4f3ffb226bee80f99"
uuid = "83423d85-b0ee-5818-9007-b63ccbeb887a"
version = "1.16.1+0"
[[ChainRulesCore]]
deps = ["Compat", "LinearAlgebra", "SparseArrays"]
git-tree-sha1 = "74e8234fb738c45e2af55fdbcd9bfbe00c2446fa"
uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
version = "1.8.0"
[[ColorSchemes]]
deps = ["ColorTypes", "Colors", "FixedPointNumbers", "Random"]
git-tree-sha1 = "a851fec56cb73cfdf43762999ec72eff5b86882a"
uuid = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
version = "3.15.0"
[[ColorTypes]]
deps = ["FixedPointNumbers", "Random"]
git-tree-sha1 = "024fe24d83e4a5bf5fc80501a314ce0d1aa35597"
uuid = "3da002f7-5984-5a60-b8a6-cbb66c0b333f"
version = "0.11.0"
[[Colors]]
deps = ["ColorTypes", "FixedPointNumbers", "Reexport"]
git-tree-sha1 = "417b0ed7b8b838aa6ca0a87aadf1bb9eb111ce40"
uuid = "5ae59095-9a9b-59fe-a467-6f913c188581"
version = "0.12.8"
[[Compat]]
deps = ["Base64", "Dates", "DelimitedFiles", "Distributed", "InteractiveUtils", "LibGit2", "Libdl", "LinearAlgebra", "Markdown", "Mmap", "Pkg", "Printf", "REPL", "Random", "SHA", "Serialization", "SharedArrays", "Sockets", "SparseArrays", "Statistics", "Test", "UUIDs", "Unicode"]
git-tree-sha1 = "31d0151f5716b655421d9d75b7fa74cc4e744df2"
uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
version = "3.39.0"
[[CompilerSupportLibraries_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
[[Contour]]
deps = ["StaticArrays"]
git-tree-sha1 = "9f02045d934dc030edad45944ea80dbd1f0ebea7"
uuid = "d38c429a-6771-53c6-b99e-75d170b6e991"
version = "0.5.7"
[[DataAPI]]
git-tree-sha1 = "cc70b17275652eb47bc9e5f81635981f13cea5c8"
uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
version = "1.9.0"
[[DataStructures]]
deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
git-tree-sha1 = "7d9d316f04214f7efdbb6398d545446e246eff02"
uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
version = "0.18.10"
[[DataValueInterfaces]]
git-tree-sha1 = "bfc1187b79289637fa0ef6d4436ebdfe6905cbd6"
uuid = "e2d170a0-9d28-54be-80f0-106bbe20a464"
version = "1.0.0"
[[Dates]]
deps = ["Printf"]
uuid = "ade2ca70-3891-5945-98fb-dc099432e06a"
[[DelimitedFiles]]
deps = ["Mmap"]
uuid = "8bb1440f-4735-579b-a4ab-409b98df4dab"
[[Distributed]]
deps = ["Random", "Serialization", "Sockets"]
uuid = "8ba89e20-285c-5b6f-9357-94700520ee1b"
[[DocStringExtensions]]
deps = ["LibGit2"]
git-tree-sha1 = "a32185f5428d3986f47c2ab78b1f216d5e6cc96f"
uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
version = "0.8.5"
[[Downloads]]
deps = ["ArgTools", "LibCURL", "NetworkOptions"]
uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
[[EarCut_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "3f3a2501fa7236e9b911e0f7a588c657e822bb6d"
uuid = "5ae413db-bbd1-5e63-b57d-d24a61df00f5"
version = "2.2.3+0"
[[Expat_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b3bfd02e98aedfa5cf885665493c5598c350cd2f"
uuid = "2e619515-83b5-522b-bb60-26c02a35a201"
version = "2.2.10+0"
[[FFMPEG]]
deps = ["FFMPEG_jll"]
git-tree-sha1 = "b57e3acbe22f8484b4b5ff66a7499717fe1a9cc8"
uuid = "c87230d0-a227-11e9-1b43-d7ebe4e7570a"
version = "0.4.1"
[[FFMPEG_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "JLLWrappers", "LAME_jll", "Libdl", "Ogg_jll", "OpenSSL_jll", "Opus_jll", "Pkg", "Zlib_jll", "libass_jll", "libfdk_aac_jll", "libvorbis_jll", "x264_jll", "x265_jll"]
git-tree-sha1 = "d8a578692e3077ac998b50c0217dfd67f21d1e5f"
uuid = "b22a6f82-2f65-5046-a5b2-351ab43fb4e5"
version = "4.4.0+0"
[[FFTW]]
deps = ["AbstractFFTs", "FFTW_jll", "LinearAlgebra", "MKL_jll", "Preferences", "Reexport"]
git-tree-sha1 = "463cb335fa22c4ebacfd1faba5fde14edb80d96c"
uuid = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341"
version = "1.4.5"
[[FFTW_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c6033cc3892d0ef5bb9cd29b7f2f0331ea5184ea"
uuid = "f5851436-0d7a-5f13-b9de-f02708fd171a"
version = "3.3.10+0"
[[FileIO]]
deps = ["Pkg", "Requires", "UUIDs"]
git-tree-sha1 = "3c041d2ac0a52a12a27af2782b34900d9c3ee68c"
uuid = "5789e2e9-d7fb-5bc7-8068-2c6fae9b9549"
version = "1.11.1"
[[FixedPointNumbers]]
deps = ["Statistics"]
git-tree-sha1 = "335bfdceacc84c5cdf16aadc768aa5ddfc5383cc"
uuid = "53c48c17-4a7d-5ca2-90c5-79b7896eea93"
version = "0.8.4"
[[Fontconfig_jll]]
deps = ["Artifacts", "Bzip2_jll", "Expat_jll", "FreeType2_jll", "JLLWrappers", "Libdl", "Libuuid_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "21efd19106a55620a188615da6d3d06cd7f6ee03"
uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
version = "2.13.93+0"
[[Formatting]]
deps = ["Printf"]
git-tree-sha1 = "8339d61043228fdd3eb658d86c926cb282ae72a8"
uuid = "59287772-0a20-5a39-b81b-1366585eb4c0"
version = "0.4.2"
[[FreeType2_jll]]
deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "87eb71354d8ec1a96d4a7636bd57a7347dde3ef9"
uuid = "d7e528f0-a631-5988-bf34-fe36492bcfd7"
version = "2.10.4+0"
[[FriBidi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "aa31987c2ba8704e23c6c8ba8a4f769d5d7e4f91"
uuid = "559328eb-81f9-559d-9380-de523a88c83c"
version = "1.0.10+0"
[[GLFW_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libglvnd_jll", "Pkg", "Xorg_libXcursor_jll", "Xorg_libXi_jll", "Xorg_libXinerama_jll", "Xorg_libXrandr_jll"]
git-tree-sha1 = "dba1e8614e98949abfa60480b13653813d8f0157"
uuid = "0656b61e-2033-5cc2-a64a-77c0f6c09b89"
version = "3.3.5+0"
[[GR]]
deps = ["Base64", "DelimitedFiles", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Printf", "Random", "Serialization", "Sockets", "Test", "UUIDs"]
git-tree-sha1 = "d189c6d2004f63fd3c91748c458b09f26de0efaa"
uuid = "28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71"
version = "0.61.0"
[[GR_jll]]
deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Pkg", "Qt5Base_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "cafe0823979a5c9bff86224b3b8de29ea5a44b2e"
uuid = "d2c73de3-f751-5644-a686-071e5b155ba9"
version = "0.61.0+0"
[[GeometryBasics]]
deps = ["EarCut_jll", "IterTools", "LinearAlgebra", "StaticArrays", "StructArrays", "Tables"]
git-tree-sha1 = "58bcdf5ebc057b085e58d95c138725628dd7453c"
uuid = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
version = "0.4.1"
[[Gettext_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "9b02998aba7bf074d14de89f9d37ca24a1a0b046"
uuid = "78b55507-aeef-58d4-861c-77aaff3498b1"
version = "0.21.0+0"
[[Glib_jll]]
deps = ["Artifacts", "Gettext_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Libiconv_jll", "Libmount_jll", "PCRE_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "7bf67e9a481712b3dbe9cb3dac852dc4b1162e02"
uuid = "7746bdde-850d-59dc-9ae8-88ece973131d"
version = "2.68.3+0"
[[Graphite2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "344bf40dcab1073aca04aa0df4fb092f920e4011"
uuid = "3b182d85-2403-5c21-9c21-1e1f0cc25472"
version = "1.3.14+0"
[[Grisu]]
git-tree-sha1 = "53bb909d1151e57e2484c3d1b53e19552b887fb2"
uuid = "42e2da0e-8278-4e71-bc24-59509adca0fe"
version = "1.0.2"
[[HTTP]]
deps = ["Base64", "Dates", "IniFile", "Logging", "MbedTLS", "NetworkOptions", "Sockets", "URIs"]
git-tree-sha1 = "14eece7a3308b4d8be910e265c724a6ba51a9798"
uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
version = "0.9.16"
[[HarfBuzz_jll]]
deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"]
git-tree-sha1 = "8a954fed8ac097d5be04921d595f741115c1b2ad"
uuid = "2e76f6c2-a576-52d4-95c1-20adfe4de566"
version = "2.8.1+0"
[[Hyperscript]]
deps = ["Test"]
git-tree-sha1 = "8d511d5b81240fc8e6802386302675bdf47737b9"
uuid = "47d2ed2b-36de-50cf-bf87-49c2cf4b8b91"
version = "0.0.4"
[[HypertextLiteral]]
git-tree-sha1 = "f6532909bf3d40b308a0f360b6a0e626c0e263a8"
uuid = "ac1192a8-f4b3-4bfe-ba22-af5b92cd3ab2"
version = "0.9.1"
[[IOCapture]]
deps = ["Logging", "Random"]
git-tree-sha1 = "f7be53659ab06ddc986428d3a9dcc95f6fa6705a"
uuid = "b5f81e59-6552-4d32-b1f0-c071b021bf89"
version = "0.2.2"
[[IniFile]]
deps = ["Test"]
git-tree-sha1 = "098e4d2c533924c921f9f9847274f2ad89e018b8"
uuid = "83e8ac13-25f8-5344-8a64-a9f2b223428f"
version = "0.5.0"
[[IntelOpenMP_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "d979e54b71da82f3a65b62553da4fc3d18c9004c"
uuid = "1d5cc7b8-4909-519e-a0f8-d0f5ad9712d0"
version = "2018.0.3+2"
[[InteractiveUtils]]
deps = ["Markdown"]
uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
[[IrrationalConstants]]
git-tree-sha1 = "7fd44fd4ff43fc60815f8e764c0f352b83c49151"
uuid = "92d709cd-6900-40b7-9082-c6be49f344b6"
version = "0.1.1"
[[IterTools]]
git-tree-sha1 = "05110a2ab1fc5f932622ffea2a003221f4782c18"
uuid = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
version = "1.3.0"
[[IteratorInterfaceExtensions]]
git-tree-sha1 = "a3f24677c21f5bbe9d2a714f95dcd58337fb2856"
uuid = "82899510-4779-5014-852e-03e436cf321d"
version = "1.0.0"
[[JLLWrappers]]
deps = ["Preferences"]
git-tree-sha1 = "642a199af8b68253517b80bd3bfd17eb4e84df6e"
uuid = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
version = "1.3.0"
[[JSON]]
deps = ["Dates", "Mmap", "Parsers", "Unicode"]
git-tree-sha1 = "8076680b162ada2a031f707ac7b4953e30667a37"
uuid = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
version = "0.21.2"
[[JpegTurbo_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "d735490ac75c5cb9f1b00d8b5509c11984dc6943"
uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
version = "2.1.0+0"
[[LAME_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "f6250b16881adf048549549fba48b1161acdac8c"
uuid = "c1c5ebd0-6772-5130-a774-d5fcae4a789d"
version = "3.100.1+0"
[[LZO_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "e5b909bcf985c5e2605737d2ce278ed791b89be6"
uuid = "dd4b983a-f0e5-5f8d-a1b7-129d4a5fb1ac"
version = "2.10.1+0"
[[LaTeXStrings]]
git-tree-sha1 = "c7f1c695e06c01b95a67f0cd1d34994f3e7db104"
uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
version = "1.2.1"
[[Latexify]]
deps = ["Formatting", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "Printf", "Requires"]
git-tree-sha1 = "a4b12a1bd2ebade87891ab7e36fdbce582301a92"
uuid = "23fbe1c1-3f47-55db-b15f-69d7ec21a316"
version = "0.15.6"
[[LazyArtifacts]]
deps = ["Artifacts", "Pkg"]
uuid = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
[[LibCURL]]
deps = ["LibCURL_jll", "MozillaCACerts_jll"]
uuid = "b27032c2-a3e7-50c8-80cd-2d36dbcbfd21"
[[LibCURL_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll", "Zlib_jll", "nghttp2_jll"]
uuid = "deac9b47-8bc7-5906-a0fe-35ac56dc84c0"
[[LibGit2]]
deps = ["Base64", "NetworkOptions", "Printf", "SHA"]
uuid = "76f85450-5226-5b5a-8eaa-529ad045b433"
[[LibSSH2_jll]]
deps = ["Artifacts", "Libdl", "MbedTLS_jll"]
uuid = "29816b5a-b9ab-546f-933c-edad1886dfa8"
[[Libdl]]
uuid = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
[[Libffi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "761a393aeccd6aa92ec3515e428c26bf99575b3b"
uuid = "e9f186c6-92d2-5b65-8a66-fee21dc1b490"
version = "3.2.2+0"
[[Libgcrypt_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgpg_error_jll", "Pkg"]
git-tree-sha1 = "64613c82a59c120435c067c2b809fc61cf5166ae"
uuid = "d4300ac3-e22c-5743-9152-c294e39db1e4"
version = "1.8.7+0"
[[Libglvnd_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll", "Xorg_libXext_jll"]
git-tree-sha1 = "7739f837d6447403596a75d19ed01fd08d6f56bf"
uuid = "7e76a0d4-f3c7-5321-8279-8d96eeed0f29"
version = "1.3.0+3"
[[Libgpg_error_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c333716e46366857753e273ce6a69ee0945a6db9"
uuid = "7add5ba3-2f88-524e-9cd5-f83b8a55f7b8"
version = "1.42.0+0"
[[Libiconv_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "42b62845d70a619f063a7da093d995ec8e15e778"
uuid = "94ce4f54-9a6c-5748-9c1c-f9c7231a4531"
version = "1.16.1+1"
[[Libmount_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "9c30530bf0effd46e15e0fdcf2b8636e78cbbd73"
uuid = "4b2f31a3-9ecc-558c-b454-b3730dcb73e9"
version = "2.35.0+0"
[[Libtiff_jll]]
deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Pkg", "Zlib_jll", "Zstd_jll"]
git-tree-sha1 = "340e257aada13f95f98ee352d316c3bed37c8ab9"
uuid = "89763e89-9b03-5906-acba-b20f662cd828"
version = "4.3.0+0"
[[Libuuid_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "7f3efec06033682db852f8b3bc3c1d2b0a0ab066"
uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
version = "2.36.0+0"
[[LinearAlgebra]]
deps = ["Libdl"]
uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
[[LogExpFunctions]]
deps = ["ChainRulesCore", "DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
git-tree-sha1 = "34dc30f868e368f8a17b728a1238f3fcda43931a"
uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
version = "0.3.3"
[[Logging]]
uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"
[[MKL_jll]]
deps = ["Artifacts", "IntelOpenMP_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "Pkg"]
git-tree-sha1 = "5455aef09b40e5020e1520f551fa3135040d4ed0"
uuid = "856f044c-d86e-5d09-b602-aeab76dc8ba7"
version = "2021.1.1+2"
[[MacroTools]]
deps = ["Markdown", "Random"]
git-tree-sha1 = "5a5bc6bf062f0f95e62d0fe0a2d99699fed82dd9"
uuid = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
version = "0.5.8"
[[Markdown]]
deps = ["Base64"]
uuid = "d6f4376e-aef5-505a-96c1-9c027394607a"
[[MbedTLS]]
deps = ["Dates", "MbedTLS_jll", "Random", "Sockets"]
git-tree-sha1 = "1c38e51c3d08ef2278062ebceade0e46cefc96fe"
uuid = "739be429-bea8-5141-9913-cc70e7f3736d"
version = "1.0.3"
[[MbedTLS_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1"
[[Measures]]
git-tree-sha1 = "e498ddeee6f9fdb4551ce855a46f54dbd900245f"
uuid = "442fdcdd-2543-5da2-b0f3-8c86c306513e"
version = "0.3.1"
[[Missings]]
deps = ["DataAPI"]
git-tree-sha1 = "bf210ce90b6c9eed32d25dbcae1ebc565df2687f"
uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28"
version = "1.0.2"
[[Mmap]]
uuid = "a63ad114-7e13-5084-954f-fe012c677804"
[[MozillaCACerts_jll]]
uuid = "14a3606d-f60d-562e-9121-12d972cd8159"
[[NaNMath]]
git-tree-sha1 = "bfe47e760d60b82b66b61d2d44128b62e3a369fb"
uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3"
version = "0.3.5"
[[NetworkOptions]]
uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908"
[[OffsetArrays]]
deps = ["Adapt"]
git-tree-sha1 = "c0e9e582987d36d5a61e650e6e543b9e44d9914b"
uuid = "6fe1bfb0-de20-5000-8ca7-80f57d26f881"
version = "1.10.7"
[[Ogg_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "7937eda4681660b4d6aeeecc2f7e1c81c8ee4e2f"
uuid = "e7412a2a-1a6e-54c0-be00-318e2571c051"
version = "1.3.5+0"
[[OpenSSL_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "15003dcb7d8db3c6c857fda14891a539a8f2705a"
uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95"
version = "1.1.10+0"
[[Opus_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "51a08fb14ec28da2ec7a927c4337e4332c2a4720"
uuid = "91d4177d-7536-5919-b921-800302f37372"
version = "1.3.2+0"
[[OrderedCollections]]
git-tree-sha1 = "85f8e6578bf1f9ee0d11e7bb1b1456435479d47c"
uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
version = "1.4.1"
[[PCRE_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b2a7af664e098055a7529ad1a900ded962bca488"
uuid = "2f80f16e-611a-54ab-bc61-aa92de5b98fc"
version = "8.44.0+0"
[[PaddedViews]]
deps = ["OffsetArrays"]
git-tree-sha1 = "646eed6f6a5d8df6708f15ea7e02a7a2c4fe4800"
uuid = "5432bcbf-9aad-5242-b902-cca2824c8663"
version = "0.5.10"
[[Parsers]]
deps = ["Dates"]
git-tree-sha1 = "a8709b968a1ea6abc2dc1967cb1db6ac9a00dfb6"
uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0"
version = "2.0.5"
[[Pixman_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b4f5d02549a10e20780a24fce72bea96b6329e29"
uuid = "30392449-352a-5448-841d-b1acce4e97dc"
version = "0.40.1+0"
[[Pkg]]
deps = ["Artifacts", "Dates", "Downloads", "LibGit2", "Libdl", "Logging", "Markdown", "Printf", "REPL", "Random", "SHA", "Serialization", "TOML", "Tar", "UUIDs", "p7zip_jll"]
uuid = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
[[PlotThemes]]
deps = ["PlotUtils", "Requires", "Statistics"]
git-tree-sha1 = "a3a964ce9dc7898193536002a6dd892b1b5a6f1d"
uuid = "ccf2f8ad-2431-5c83-bf29-c5338b663b6a"
version = "2.0.1"
[[PlotUtils]]
deps = ["ColorSchemes", "Colors", "Dates", "Printf", "Random", "Reexport", "Statistics"]
git-tree-sha1 = "b084324b4af5a438cd63619fd006614b3b20b87b"
uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
version = "1.0.15"
[[Plots]]
deps = ["Base64", "Contour", "Dates", "Downloads", "FFMPEG", "FixedPointNumbers", "GR", "GeometryBasics", "JSON", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "PlotThemes", "PlotUtils", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "Requires", "Scratch", "Showoff", "SparseArrays", "Statistics", "StatsBase", "UUIDs"]
git-tree-sha1 = "bc70e31a04f22780b57ad399ff94f9b78a1e7b39"
uuid = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
version = "1.22.5"
[[PlutoUI]]
deps = ["Base64", "Dates", "Hyperscript", "HypertextLiteral", "IOCapture", "InteractiveUtils", "JSON", "Logging", "Markdown", "Random", "Reexport", "UUIDs"]
git-tree-sha1 = "4c8a7d080daca18545c56f1cac28710c362478f3"
uuid = "7f904dfe-b85e-4ff6-b463-dae2292396a8"
version = "0.7.16"
[[Preferences]]
deps = ["TOML"]
git-tree-sha1 = "00cfd92944ca9c760982747e9a1d0d5d86ab1e5a"
uuid = "21216c6a-2e73-6563-6e65-726566657250"
version = "1.2.2"
[[Printf]]
deps = ["Unicode"]
uuid = "de0858da-6303-5e67-8744-51eddeeeb8d7"
[[Qt5Base_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Fontconfig_jll", "Glib_jll", "JLLWrappers", "Libdl", "Libglvnd_jll", "OpenSSL_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libxcb_jll", "Xorg_xcb_util_image_jll", "Xorg_xcb_util_keysyms_jll", "Xorg_xcb_util_renderutil_jll", "Xorg_xcb_util_wm_jll", "Zlib_jll", "xkbcommon_jll"]
git-tree-sha1 = "ad368663a5e20dbb8d6dc2fddeefe4dae0781ae8"
uuid = "ea2cea3b-5b76-57ae-a6ef-0a8af62496e1"
version = "5.15.3+0"
[[REPL]]
deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"]
uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
[[Random]]
deps = ["Serialization"]
uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
[[RecipesBase]]
git-tree-sha1 = "44a75aa7a527910ee3d1751d1f0e4148698add9e"
uuid = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
version = "1.1.2"
[[RecipesPipeline]]
deps = ["Dates", "NaNMath", "PlotUtils", "RecipesBase"]
git-tree-sha1 = "7ad0dfa8d03b7bcf8c597f59f5292801730c55b8"
uuid = "01d81517-befc-4cb6-b9ec-a95719d0359c"
version = "0.4.1"
[[Reexport]]
git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b"
uuid = "189a3867-3050-52da-a836-e630ba90ab69"
version = "1.2.2"
[[Requires]]
deps = ["UUIDs"]
git-tree-sha1 = "4036a3bd08ac7e968e27c203d45f5fff15020621"
uuid = "ae029012-a4dd-5104-9daa-d747884805df"
version = "1.1.3"
[[SHA]]
uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce"
[[Scratch]]
deps = ["Dates"]
git-tree-sha1 = "0b4b7f1393cff97c33891da2a0bf69c6ed241fda"
uuid = "6c6a2e73-6563-6170-7368-637461726353"
version = "1.1.0"
[[Serialization]]
uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
[[SharedArrays]]
deps = ["Distributed", "Mmap", "Random", "Serialization"]
uuid = "1a1011a3-84de-559e-8e89-a11a2f7dc383"
[[Showoff]]
deps = ["Dates", "Grisu"]
git-tree-sha1 = "91eddf657aca81df9ae6ceb20b959ae5653ad1de"
uuid = "992d4aef-0814-514b-bc4d-f2e9a6c4116f"
version = "1.0.3"
[[Sockets]]
uuid = "6462fe0b-24de-5631-8697-dd941f90decc"
[[SortingAlgorithms]]
deps = ["DataStructures"]
git-tree-sha1 = "b3363d7460f7d098ca0912c69b082f75625d7508"
uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c"
version = "1.0.1"
[[SparseArrays]]
deps = ["LinearAlgebra", "Random"]
uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
[[SpeechFeatures]]
deps = ["FFTW", "LinearAlgebra", "PaddedViews"]
git-tree-sha1 = "2a489b75dfb511e6ddfaaaebec85f5df9a2b496d"
uuid = "6f3487c4-5ca2-4050-bfeb-2cf56df92307"
version = "0.3.0"
[[StaticArrays]]
deps = ["LinearAlgebra", "Random", "Statistics"]
git-tree-sha1 = "3c76dde64d03699e074ac02eb2e8ba8254d428da"
uuid = "90137ffa-7385-5640-81b9-e52037218182"
version = "1.2.13"
[[Statistics]]
deps = ["LinearAlgebra", "SparseArrays"]
uuid = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
[[StatsAPI]]
git-tree-sha1 = "1958272568dc176a1d881acb797beb909c785510"
uuid = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
version = "1.0.0"
[[StatsBase]]
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "LogExpFunctions", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
git-tree-sha1 = "65fb73045d0e9aaa39ea9a29a5e7506d9ef6511f"
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
version = "0.33.11"
[[StructArrays]]
deps = ["Adapt", "DataAPI", "StaticArrays", "Tables"]
git-tree-sha1 = "2ce41e0d042c60ecd131e9fb7154a3bfadbf50d3"
uuid = "09ab397b-f2b6-538f-b94a-2f83cf4a842a"
version = "0.6.3"
[[TOML]]
deps = ["Dates"]
uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
[[TableTraits]]
deps = ["IteratorInterfaceExtensions"]
git-tree-sha1 = "c06b2f539df1c6efa794486abfb6ed2022561a39"
uuid = "3783bdb8-4a98-5b6b-af9a-565f29a5fe9c"
version = "1.0.1"
[[Tables]]
deps = ["DataAPI", "DataValueInterfaces", "IteratorInterfaceExtensions", "LinearAlgebra", "TableTraits", "Test"]
git-tree-sha1 = "fed34d0e71b91734bf0a7e10eb1bb05296ddbcd0"
uuid = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
version = "1.6.0"
[[Tar]]
deps = ["ArgTools", "SHA"]
uuid = "a4e569a6-e804-4fa4-b0f3-eef7a1d5b13e"
[[Test]]
deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[URIs]]
git-tree-sha1 = "97bbe755a53fe859669cd907f2d96aee8d2c1355"
uuid = "5c2747f8-b7ea-4ff2-ba2e-563bfd36b1d4"
version = "1.3.0"
[[UUIDs]]
deps = ["Random", "SHA"]
uuid = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
[[Unicode]]
uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5"
[[WAV]]
deps = ["Base64", "FileIO", "Libdl", "Logging"]
git-tree-sha1 = "1d5dc6568ab6b2846efd10cc4d070bb6be73a6b8"
uuid = "8149f6b0-98f6-5db9-b78f-408fbbb8ef88"
version = "1.1.1"
[[Wayland_jll]]
deps = ["Artifacts", "Expat_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "3e61f0b86f90dacb0bc0e73a0c5a83f6a8636e23"
uuid = "a2964d1f-97da-50d4-b82a-358c7fce9d89"
version = "1.19.0+0"
[[Wayland_protocols_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Wayland_jll"]
git-tree-sha1 = "2839f1c1296940218e35df0bbb220f2a79686670"
uuid = "2381bf8a-dfd0-557d-9999-79630e7b1b91"
version = "1.18.0+4"
[[XML2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "1acf5bdf07aa0907e0a37d3718bb88d4b687b74a"
uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
version = "2.9.12+0"
[[XSLT_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
git-tree-sha1 = "91844873c4085240b95e795f692c4cec4d805f8a"
uuid = "aed1982a-8fda-507f-9586-7b0439959a61"
version = "1.1.34+0"
[[Xorg_libX11_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxcb_jll", "Xorg_xtrans_jll"]
git-tree-sha1 = "5be649d550f3f4b95308bf0183b82e2582876527"
uuid = "4f6342f7-b3d2-589e-9d20-edeb45f2b2bc"
version = "1.6.9+4"
[[Xorg_libXau_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4e490d5c960c314f33885790ed410ff3a94ce67e"
uuid = "0c0b7dd1-d40b-584c-a123-a41640f87eec"
version = "1.0.9+4"
[[Xorg_libXcursor_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXfixes_jll", "Xorg_libXrender_jll"]
git-tree-sha1 = "12e0eb3bc634fa2080c1c37fccf56f7c22989afd"
uuid = "935fb764-8cf2-53bf-bb30-45bb1f8bf724"
version = "1.2.0+4"
[[Xorg_libXdmcp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4fe47bd2247248125c428978740e18a681372dd4"
uuid = "a3789734-cfe1-5b06-b2d0-1dd0d9d62d05"
version = "1.1.3+4"
[[Xorg_libXext_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "b7c0aa8c376b31e4852b360222848637f481f8c3"
uuid = "1082639a-0dae-5f34-9b06-72781eeb8cb3"
version = "1.3.4+4"
[[Xorg_libXfixes_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "0e0dc7431e7a0587559f9294aeec269471c991a4"
uuid = "d091e8ba-531a-589c-9de9-94069b037ed8"
version = "5.0.3+4"
[[Xorg_libXi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll", "Xorg_libXfixes_jll"]
git-tree-sha1 = "89b52bc2160aadc84d707093930ef0bffa641246"
uuid = "a51aa0fd-4e3c-5386-b890-e753decda492"
version = "1.7.10+4"
[[Xorg_libXinerama_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll"]
git-tree-sha1 = "26be8b1c342929259317d8b9f7b53bf2bb73b123"
uuid = "d1454406-59df-5ea1-beac-c340f2130bc3"
version = "1.1.4+4"
[[Xorg_libXrandr_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll"]
git-tree-sha1 = "34cea83cb726fb58f325887bf0612c6b3fb17631"
uuid = "ec84b674-ba8e-5d96-8ba1-2a689ba10484"
version = "1.5.2+4"
[[Xorg_libXrender_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "19560f30fd49f4d4efbe7002a1037f8c43d43b96"
uuid = "ea2f1a96-1ddc-540d-b46f-429655e07cfa"
version = "0.9.10+4"
[[Xorg_libpthread_stubs_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "6783737e45d3c59a4a4c4091f5f88cdcf0908cbb"
uuid = "14d82f49-176c-5ed1-bb49-ad3f5cbd8c74"
version = "0.1.0+3"
[[Xorg_libxcb_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "XSLT_jll", "Xorg_libXau_jll", "Xorg_libXdmcp_jll", "Xorg_libpthread_stubs_jll"]
git-tree-sha1 = "daf17f441228e7a3833846cd048892861cff16d6"
uuid = "c7cfdc94-dc32-55de-ac96-5a1b8d977c5b"
version = "1.13.0+3"
[[Xorg_libxkbfile_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "926af861744212db0eb001d9e40b5d16292080b2"
uuid = "cc61e674-0454-545c-8b26-ed2c68acab7a"
version = "1.1.0+4"
[[Xorg_xcb_util_image_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "0fab0a40349ba1cba2c1da699243396ff8e94b97"
uuid = "12413925-8142-5f55-bb0e-6d7ca50bb09b"
version = "0.4.0+1"
[[Xorg_xcb_util_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxcb_jll"]
git-tree-sha1 = "e7fd7b2881fa2eaa72717420894d3938177862d1"
uuid = "2def613f-5ad1-5310-b15b-b15d46f528f5"
version = "0.4.0+1"
[[Xorg_xcb_util_keysyms_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "d1151e2c45a544f32441a567d1690e701ec89b00"
uuid = "975044d2-76e6-5fbe-bf08-97ce7c6574c7"
version = "0.4.0+1"
[[Xorg_xcb_util_renderutil_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "dfd7a8f38d4613b6a575253b3174dd991ca6183e"
uuid = "0d47668e-0667-5a69-a72c-f761630bfb7e"
version = "0.3.9+1"
[[Xorg_xcb_util_wm_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "e78d10aab01a4a154142c5006ed44fd9e8e31b67"
uuid = "c22f9ab0-d5fe-5066-847c-f4bb1cd4e361"
version = "0.4.1+1"
[[Xorg_xkbcomp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxkbfile_jll"]
git-tree-sha1 = "4bcbf660f6c2e714f87e960a171b119d06ee163b"
uuid = "35661453-b289-5fab-8a00-3d9160c6a3a4"
version = "1.4.2+4"
[[Xorg_xkeyboard_config_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xkbcomp_jll"]
git-tree-sha1 = "5c8424f8a67c3f2209646d4425f3d415fee5931d"
uuid = "33bec58e-1273-512f-9401-5d533626f822"
version = "2.27.0+4"
[[Xorg_xtrans_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "79c31e7844f6ecf779705fbc12146eb190b7d845"
uuid = "c5fb5394-a638-5e4d-96e5-b29de1b5cf10"
version = "1.4.0+3"
[[Zlib_jll]]
deps = ["Libdl"]
uuid = "83775a58-1f1d-513f-b197-d71354ab007a"
[[Zstd_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "cc4bf3fdde8b7e3e9fa0351bdeedba1cf3b7f6e6"
uuid = "3161d3a3-bdf6-5164-811a-617609db77b4"
version = "1.5.0+0"
[[libass_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "HarfBuzz_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "5982a94fcba20f02f42ace44b9894ee2b140fe47"
uuid = "0ac62f75-1d6f-5e53-bd7c-93b484bb37c0"
version = "0.15.1+0"
[[libfdk_aac_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "daacc84a041563f965be61859a36e17c4e4fcd55"
uuid = "f638f0a6-7fb0-5443-88ba-1cc74229b280"
version = "2.0.2+0"
[[libpng_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "94d180a6d2b5e55e447e2d27a29ed04fe79eb30c"
uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
version = "1.6.38+0"
[[libvorbis_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Ogg_jll", "Pkg"]
git-tree-sha1 = "c45f4e40e7aafe9d086379e5578947ec8b95a8fb"
uuid = "f27f6e37-5d2b-51aa-960f-b287f2bc3b7a"
version = "1.3.7+0"
[[nghttp2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850ede-7688-5339-a07c-302acd2aaf8d"
[[p7zip_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
[[x264_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4fea590b89e6ec504593146bf8b988b2c00922b2"
uuid = "1270edf5-f2f9-52d2-97e9-ab00b5d0237a"
version = "2021.5.5+0"
[[x265_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "ee567a171cce03570d77ad3a43e90218e38937a9"
uuid = "dfaa095f-4041-5dcd-9319-2fabd8486b76"
version = "3.5.0+0"
[[xkbcommon_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Wayland_jll", "Wayland_protocols_jll", "Xorg_libxcb_jll", "Xorg_xkeyboard_config_jll"]
git-tree-sha1 = "ece2350174195bb31de1a63bea3a41ae1aa593b6"
uuid = "d8fb68d0-12a3-5cfd-a85a-d49703b185fd"
version = "0.9.1+5"
"""
# ╔═╡ Cell order:
# ╟─f87589e3-c5d7-41b5-b376-bcf9eec006d1
# ╟─d9026f53-756d-4862-a258-f9663a9a76a2
# ╠═75ae3354-2aaa-11ec-1805-d1efd04acf08
# ╠═319b69f9-6c9d-4d22-9896-055800cf5de8
# ╟─844d4433-bc74-472b-9723-d4136bf56f0f
# ╠═86a58676-7f23-4e45-8ffb-0413e00e3237
# ╟─f6647baa-e24a-4c67-9c1c-ae95cd9239e4
# ╠═4cd9e50b-6e12-48e0-812d-00af1598b32c
# ╟─f2227028-3926-4864-9330-33cacc6349be
# ╠═ab6e2ce4-5941-4441-ae1d-7417a9b2b84e
# ╟─786d833c-4a58-48d3-9e6e-b7869fd02a2e
# ╠═8d116895-703f-4fd5-a3a9-aa8925ef7461
# ╟─8daea702-d679-4ef0-96d5-230f597889a6
# ╠═db90b23f-d363-432d-a2e2-5772bf1657ba
# ╠═0a9c2db4-bd6e-42e5-874f-28f75b5385c5
# ╟─0a2780df-8fee-4b27-a944-3e0c7f2aa053
# ╠═da662210-d760-4989-b6c3-99c58395514f
# ╟─8bbd7a37-c714-4f64-81d0-48a18717336b
# ╠═70ef4159-f09a-4e2d-a266-c86972a6a611
# ╠═ce94d7c3-5814-4805-a5c5-bf6e56c412ff
# ╟─b457c84c-50aa-43aa-84d6-d38cff22883b
# ╠═2623d5d5-ba06-4247-8929-5d98d8c65c89
# ╠═65e7fb61-4fb5-4826-8487-2c613b782773
# ╟─045b825e-47ed-462f-912d-3954812651a8
# ╠═de44637f-2f24-4a5a-b1f3-1f5dd90f85af
# ╠═8f876646-f9bf-489a-8002-607d38eee4e9
# ╟─d49fd625-0415-44b4-94a9-94d2780aa0c3
# ╟─dfa1065d-29ad-443e-930c-33ae740652d7
# ╠═b1b54b9e-bca0-4e65-901d-6cf690331e2c
# ╠═9fd33feb-e0c2-45fc-80a2-baa9fe9bbcd3
# ╠═717bf350-954e-4e33-95ef-063c89fe90ae
# ╟─19148014-f27e-4821-946e-fb68345a7641
# ╠═f283f94f-993a-4156-b606-8014aae341ca
# ╟─95b9d153-4934-45d8-b9f3-138d93757bfb
# ╠═f4f33068-88f2-4b23-bb7b-47abc9e34bac
# ╟─e2b3d74c-9199-4c03-8405-fb19f171fd05
# ╠═d4cb69e2-fd46-4bc6-ae1b-8e041e015f76
# ╟─d18010a6-6ac5-493f-92fc-590bf6bd6fe3
# ╠═3ae4936f-faa9-45ac-90cd-c2a1bc782550
# ╟─831accf5-fa88-492c-9a9e-6e7e58a6ce91
# ╠═68ef4c2f-eb3c-458f-96ad-a301754bc469
# ╠═013a0118-8181-461e-bc8f-fb6479787383
# ╟─6c94c82f-837b-4bcc-8db8-79ad8a0382d4
# ╠═5de5c24d-5407-4001-96a1-21094719c65f
# ╟─6398bf0b-295e-4c6d-a9fa-0df8c1bd2807
# ╠═20c2333c-f368-4077-86ef-794e849adb0a
# ╠═406bce61-409c-4d3e-8d50-930f4b48387b
# ╟─df2ee608-be7f-44d6-bab8-41a67fbe9e48
# ╠═39b9c15e-8459-45a3-b4de-9bada6203580
# ╠═f19ba4e7-8625-43b9-a989-95e3f7ab1825
# ╟─664a2a9b-d12a-4230-b0bb-c4eb32dbd253
# ╠═07c8b5e6-0f95-49ea-8fb8-1aa24c6bc11c
# ╠═82550777-784f-4c97-86e2-1e0bad53f9ae
# ╟─b0f01b5a-40a8-4bc1-bab6-ad9ea1daff73
# ╠═ce9a8367-48d0-4947-8499-50b674d763ea
# ╠═d2c573f1-58b2-4104-b619-56cfbb522063
# ╠═057c98a6-9878-4d51-aede-f77603af7e16
# ╟─09528ac5-ffda-4d0a-b7ce-522722593644
# ╠═9c9d7293-68c7-4d66-bea0-1743019bf9dc
# ╠═f97d744f-be53-42a4-8800-e83d4440b0e6
# ╠═857e1fa9-6997-4c96-90fa-ae0fbb9e8cc2
# ╠═83af226d-f60f-461e-8c28-835160d5c270
# ╠═b8180e1c-d698-44ec-9372-a7d8f133b3f1
# ╟─0582ef6f-dcd2-42c2-a1bd-8aac011cf166
# ╠═09ede491-cb56-4327-b2e0-6e10b3a5483d
# ╠═9c0fd83e-9217-4516-a4e6-9566a7e78b31
# ╠═af564d77-dc11-4125-bde3-1f07c4521937
# ╟─00000000-0000-0000-0000-000000000001
# ╟─00000000-0000-0000-0000-000000000002

### A Pluto.jl notebook ###
# v0.16.1
using Markdown
using InteractiveUtils
# ╔═╡ 3d4495cd-3107-4855-8a57-b4154f7af653
begin
using SpeechFeatures
using WAV
using Plots
end
# ╔═╡ 9a0f7ed6-2b6c-11ec-23c2-7fd783169875
md"""
# SpeechFeatures tutorial
*[Lucas Ondel](https://lucasondel.github.io/), October 2021*
This notebook shows how to use the [SpeechFeatures](https://github.com/lucasondel/SpeechFeatures.jl) Julia package.
"""
# ╔═╡ 160a988a-20cb-4ff6-a6a6-7ace16f4a97a
md"""
## Loading the data
We will work with a sample from the TIMIT corpus, freely available on the LDC website. First, we download it to the directory of this notebook.
"""
# ╔═╡ 332e11ed-69ed-41a3-a1f1-4f718efdffa1
wavpath = download("https://catalog.ldc.upenn.edu/desc/addenda/LDC93S1.wav",
joinpath(@__DIR__, "sample.wav"))
# ╔═╡ cf2a2daa-3334-45d3-83fb-132c0c211d96
channels, fs = wavread(wavpath, format="double")
# ╔═╡ 4f43424d-4d92-4a47-8792-cf0e3a448292
x = channels[:,1]
# ╔═╡ baaed408-18f7-4648-a04c-99c58abd907f
md"""
## Short-Term Fourier Transform
The first step of most speech feature pipelines is to compute the FFT of overlapping frames.
"""
# ╔═╡ bf913df5-348a-4a3e-8c96-e52dcceceb43
S, fftlen = stft(x; srate=fs)
# ╔═╡ 2d9db55a-4826-4173-a3b2-998187803ad2
heatmap(log.(abs.(S)))
# ╔═╡ 83d7af4e-dddf-4621-81fb-4a5e11c1dcf0
md"""
## Mel-spectrum
To get the mel-spectrum, we create the filter bank and apply it to the magnitude of the STFT.
"""
# ╔═╡ fe42604a-b264-43ec-abab-135fd13c326c
fbank = FilterBank(26; fftlen)
# ╔═╡ 303a712f-7048-4a55-a938-e92c0d396bfd
melspectrum = fbank * abs.(S)
# ╔═╡ 575d5378-6fc1-4334-8975-913288136aa5
heatmap(log.(melspectrum))
# ╔═╡ d399c618-95e1-43c3-a9f4-1ce75e41f215
md"""
## MFCC
The mel-frequency cepstral coefficients (MFCCs) can be obtained from the mel-spectrum by calling `mfcc`.
"""
# ╔═╡ 8b3961a5-954d-4b05-8701-f21bed104d7a
MFCCs = mfcc(melspectrum; nceps=13)
# ╔═╡ e705a092-6e9d-45c8-9194-dbbbd2fa61b7
heatmap(MFCCs)
# ╔═╡ cc5a1be6-88bc-421a-a064-5f3bfa16e21d
md"""
## Derivatives
Finally, use `add_deltas` to append the first and second derivatives.
"""
# ╔═╡ 1031d2f5-18ec-44fc-b10f-640954991b31
MFCCs_Δ_ΔΔ = add_deltas(MFCCs; order=2)
# ╔═╡ 0d535c94-4a2f-4227-9c62-d757f94f581e
size(MFCCs_Δ_ΔΔ)
# ╔═╡ 84c10764-0d44-43d0-8c31-e34f33df5496
heatmap(MFCCs_Δ_ΔΔ)
# ╔═╡ 00000000-0000-0000-0000-000000000001
PLUTO_PROJECT_TOML_CONTENTS = """
[deps]
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
SpeechFeatures = "6f3487c4-5ca2-4050-bfeb-2cf56df92307"
WAV = "8149f6b0-98f6-5db9-b78f-408fbbb8ef88"
[compat]
Plots = "~1.22.5"
SpeechFeatures = "~0.3.1"
WAV = "~1.1.1"
"""
# ╔═╡ 00000000-0000-0000-0000-000000000002
PLUTO_MANIFEST_TOML_CONTENTS = """
# This file is machine-generated - editing it directly is not advised
[[AbstractFFTs]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "485ee0867925449198280d4af84bdb46a2a404d0"
uuid = "621f4979-c628-5d54-868e-fcf4e3e8185c"
version = "1.0.1"
[[Adapt]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "84918055d15b3114ede17ac6a7182f68870c16f7"
uuid = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
version = "3.3.1"
[[ArgTools]]
uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f"
[[Artifacts]]
uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
[[Base64]]
uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
[[Bzip2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "19a35467a82e236ff51bc17a3a44b69ef35185a2"
uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
version = "1.0.8+0"
[[Cairo_jll]]
deps = ["Artifacts", "Bzip2_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "JLLWrappers", "LZO_jll", "Libdl", "Pixman_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "f2202b55d816427cd385a9a4f3ffb226bee80f99"
uuid = "83423d85-b0ee-5818-9007-b63ccbeb887a"
version = "1.16.1+0"
[[ChainRulesCore]]
deps = ["Compat", "LinearAlgebra", "SparseArrays"]
git-tree-sha1 = "74e8234fb738c45e2af55fdbcd9bfbe00c2446fa"
uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
version = "1.8.0"
[[ColorSchemes]]
deps = ["ColorTypes", "Colors", "FixedPointNumbers", "Random"]
git-tree-sha1 = "a851fec56cb73cfdf43762999ec72eff5b86882a"
uuid = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
version = "3.15.0"
[[ColorTypes]]
deps = ["FixedPointNumbers", "Random"]
git-tree-sha1 = "024fe24d83e4a5bf5fc80501a314ce0d1aa35597"
uuid = "3da002f7-5984-5a60-b8a6-cbb66c0b333f"
version = "0.11.0"
[[Colors]]
deps = ["ColorTypes", "FixedPointNumbers", "Reexport"]
git-tree-sha1 = "417b0ed7b8b838aa6ca0a87aadf1bb9eb111ce40"
uuid = "5ae59095-9a9b-59fe-a467-6f913c188581"
version = "0.12.8"
[[Compat]]
deps = ["Base64", "Dates", "DelimitedFiles", "Distributed", "InteractiveUtils", "LibGit2", "Libdl", "LinearAlgebra", "Markdown", "Mmap", "Pkg", "Printf", "REPL", "Random", "SHA", "Serialization", "SharedArrays", "Sockets", "SparseArrays", "Statistics", "Test", "UUIDs", "Unicode"]
git-tree-sha1 = "31d0151f5716b655421d9d75b7fa74cc4e744df2"
uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
version = "3.39.0"
[[CompilerSupportLibraries_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
[[Contour]]
deps = ["StaticArrays"]
git-tree-sha1 = "9f02045d934dc030edad45944ea80dbd1f0ebea7"
uuid = "d38c429a-6771-53c6-b99e-75d170b6e991"
version = "0.5.7"
[[DataAPI]]
git-tree-sha1 = "cc70b17275652eb47bc9e5f81635981f13cea5c8"
uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
version = "1.9.0"
[[DataStructures]]
deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
git-tree-sha1 = "7d9d316f04214f7efdbb6398d545446e246eff02"
uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
version = "0.18.10"
[[DataValueInterfaces]]
git-tree-sha1 = "bfc1187b79289637fa0ef6d4436ebdfe6905cbd6"
uuid = "e2d170a0-9d28-54be-80f0-106bbe20a464"
version = "1.0.0"
[[Dates]]
deps = ["Printf"]
uuid = "ade2ca70-3891-5945-98fb-dc099432e06a"
[[DelimitedFiles]]
deps = ["Mmap"]
uuid = "8bb1440f-4735-579b-a4ab-409b98df4dab"
[[Distributed]]
deps = ["Random", "Serialization", "Sockets"]
uuid = "8ba89e20-285c-5b6f-9357-94700520ee1b"
[[DocStringExtensions]]
deps = ["LibGit2"]
git-tree-sha1 = "a32185f5428d3986f47c2ab78b1f216d5e6cc96f"
uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
version = "0.8.5"
[[Downloads]]
deps = ["ArgTools", "LibCURL", "NetworkOptions"]
uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
[[EarCut_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "3f3a2501fa7236e9b911e0f7a588c657e822bb6d"
uuid = "5ae413db-bbd1-5e63-b57d-d24a61df00f5"
version = "2.2.3+0"
[[Expat_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b3bfd02e98aedfa5cf885665493c5598c350cd2f"
uuid = "2e619515-83b5-522b-bb60-26c02a35a201"
version = "2.2.10+0"
[[FFMPEG]]
deps = ["FFMPEG_jll"]
git-tree-sha1 = "b57e3acbe22f8484b4b5ff66a7499717fe1a9cc8"
uuid = "c87230d0-a227-11e9-1b43-d7ebe4e7570a"
version = "0.4.1"
[[FFMPEG_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "JLLWrappers", "LAME_jll", "Libdl", "Ogg_jll", "OpenSSL_jll", "Opus_jll", "Pkg", "Zlib_jll", "libass_jll", "libfdk_aac_jll", "libvorbis_jll", "x264_jll", "x265_jll"]
git-tree-sha1 = "d8a578692e3077ac998b50c0217dfd67f21d1e5f"
uuid = "b22a6f82-2f65-5046-a5b2-351ab43fb4e5"
version = "4.4.0+0"
[[FFTW]]
deps = ["AbstractFFTs", "FFTW_jll", "LinearAlgebra", "MKL_jll", "Preferences", "Reexport"]
git-tree-sha1 = "463cb335fa22c4ebacfd1faba5fde14edb80d96c"
uuid = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341"
version = "1.4.5"
[[FFTW_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c6033cc3892d0ef5bb9cd29b7f2f0331ea5184ea"
uuid = "f5851436-0d7a-5f13-b9de-f02708fd171a"
version = "3.3.10+0"
[[FileIO]]
deps = ["Pkg", "Requires", "UUIDs"]
git-tree-sha1 = "3c041d2ac0a52a12a27af2782b34900d9c3ee68c"
uuid = "5789e2e9-d7fb-5bc7-8068-2c6fae9b9549"
version = "1.11.1"
[[FixedPointNumbers]]
deps = ["Statistics"]
git-tree-sha1 = "335bfdceacc84c5cdf16aadc768aa5ddfc5383cc"
uuid = "53c48c17-4a7d-5ca2-90c5-79b7896eea93"
version = "0.8.4"
[[Fontconfig_jll]]
deps = ["Artifacts", "Bzip2_jll", "Expat_jll", "FreeType2_jll", "JLLWrappers", "Libdl", "Libuuid_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "21efd19106a55620a188615da6d3d06cd7f6ee03"
uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
version = "2.13.93+0"
[[Formatting]]
deps = ["Printf"]
git-tree-sha1 = "8339d61043228fdd3eb658d86c926cb282ae72a8"
uuid = "59287772-0a20-5a39-b81b-1366585eb4c0"
version = "0.4.2"
[[FreeType2_jll]]
deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "87eb71354d8ec1a96d4a7636bd57a7347dde3ef9"
uuid = "d7e528f0-a631-5988-bf34-fe36492bcfd7"
version = "2.10.4+0"
[[FriBidi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "aa31987c2ba8704e23c6c8ba8a4f769d5d7e4f91"
uuid = "559328eb-81f9-559d-9380-de523a88c83c"
version = "1.0.10+0"
[[GLFW_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libglvnd_jll", "Pkg", "Xorg_libXcursor_jll", "Xorg_libXi_jll", "Xorg_libXinerama_jll", "Xorg_libXrandr_jll"]
git-tree-sha1 = "dba1e8614e98949abfa60480b13653813d8f0157"
uuid = "0656b61e-2033-5cc2-a64a-77c0f6c09b89"
version = "3.3.5+0"
[[GR]]
deps = ["Base64", "DelimitedFiles", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Printf", "Random", "Serialization", "Sockets", "Test", "UUIDs"]
git-tree-sha1 = "d189c6d2004f63fd3c91748c458b09f26de0efaa"
uuid = "28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71"
version = "0.61.0"
[[GR_jll]]
deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Pkg", "Qt5Base_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "cafe0823979a5c9bff86224b3b8de29ea5a44b2e"
uuid = "d2c73de3-f751-5644-a686-071e5b155ba9"
version = "0.61.0+0"
[[GeometryBasics]]
deps = ["EarCut_jll", "IterTools", "LinearAlgebra", "StaticArrays", "StructArrays", "Tables"]
git-tree-sha1 = "58bcdf5ebc057b085e58d95c138725628dd7453c"
uuid = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
version = "0.4.1"
[[Gettext_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "9b02998aba7bf074d14de89f9d37ca24a1a0b046"
uuid = "78b55507-aeef-58d4-861c-77aaff3498b1"
version = "0.21.0+0"
[[Glib_jll]]
deps = ["Artifacts", "Gettext_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Libiconv_jll", "Libmount_jll", "PCRE_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "7bf67e9a481712b3dbe9cb3dac852dc4b1162e02"
uuid = "7746bdde-850d-59dc-9ae8-88ece973131d"
version = "2.68.3+0"
[[Graphite2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "344bf40dcab1073aca04aa0df4fb092f920e4011"
uuid = "3b182d85-2403-5c21-9c21-1e1f0cc25472"
version = "1.3.14+0"
[[Grisu]]
git-tree-sha1 = "53bb909d1151e57e2484c3d1b53e19552b887fb2"
uuid = "42e2da0e-8278-4e71-bc24-59509adca0fe"
version = "1.0.2"
[[HTTP]]
deps = ["Base64", "Dates", "IniFile", "Logging", "MbedTLS", "NetworkOptions", "Sockets", "URIs"]
git-tree-sha1 = "14eece7a3308b4d8be910e265c724a6ba51a9798"
uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
version = "0.9.16"
[[HarfBuzz_jll]]
deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"]
git-tree-sha1 = "8a954fed8ac097d5be04921d595f741115c1b2ad"
uuid = "2e76f6c2-a576-52d4-95c1-20adfe4de566"
version = "2.8.1+0"
[[IniFile]]
deps = ["Test"]
git-tree-sha1 = "098e4d2c533924c921f9f9847274f2ad89e018b8"
uuid = "83e8ac13-25f8-5344-8a64-a9f2b223428f"
version = "0.5.0"
[[IntelOpenMP_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "d979e54b71da82f3a65b62553da4fc3d18c9004c"
uuid = "1d5cc7b8-4909-519e-a0f8-d0f5ad9712d0"
version = "2018.0.3+2"
[[InteractiveUtils]]
deps = ["Markdown"]
uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
[[IrrationalConstants]]
git-tree-sha1 = "7fd44fd4ff43fc60815f8e764c0f352b83c49151"
uuid = "92d709cd-6900-40b7-9082-c6be49f344b6"
version = "0.1.1"
[[IterTools]]
git-tree-sha1 = "05110a2ab1fc5f932622ffea2a003221f4782c18"
uuid = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
version = "1.3.0"
[[IteratorInterfaceExtensions]]
git-tree-sha1 = "a3f24677c21f5bbe9d2a714f95dcd58337fb2856"
uuid = "82899510-4779-5014-852e-03e436cf321d"
version = "1.0.0"
[[JLLWrappers]]
deps = ["Preferences"]
git-tree-sha1 = "642a199af8b68253517b80bd3bfd17eb4e84df6e"
uuid = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
version = "1.3.0"
[[JSON]]
deps = ["Dates", "Mmap", "Parsers", "Unicode"]
git-tree-sha1 = "8076680b162ada2a031f707ac7b4953e30667a37"
uuid = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
version = "0.21.2"
[[JpegTurbo_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "d735490ac75c5cb9f1b00d8b5509c11984dc6943"
uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
version = "2.1.0+0"
[[LAME_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "f6250b16881adf048549549fba48b1161acdac8c"
uuid = "c1c5ebd0-6772-5130-a774-d5fcae4a789d"
version = "3.100.1+0"
[[LZO_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "e5b909bcf985c5e2605737d2ce278ed791b89be6"
uuid = "dd4b983a-f0e5-5f8d-a1b7-129d4a5fb1ac"
version = "2.10.1+0"
[[LaTeXStrings]]
git-tree-sha1 = "c7f1c695e06c01b95a67f0cd1d34994f3e7db104"
uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
version = "1.2.1"
[[Latexify]]
deps = ["Formatting", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "Printf", "Requires"]
git-tree-sha1 = "a4b12a1bd2ebade87891ab7e36fdbce582301a92"
uuid = "23fbe1c1-3f47-55db-b15f-69d7ec21a316"
version = "0.15.6"
[[LazyArtifacts]]
deps = ["Artifacts", "Pkg"]
uuid = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
[[LibCURL]]
deps = ["LibCURL_jll", "MozillaCACerts_jll"]
uuid = "b27032c2-a3e7-50c8-80cd-2d36dbcbfd21"
[[LibCURL_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll", "Zlib_jll", "nghttp2_jll"]
uuid = "deac9b47-8bc7-5906-a0fe-35ac56dc84c0"
[[LibGit2]]
deps = ["Base64", "NetworkOptions", "Printf", "SHA"]
uuid = "76f85450-5226-5b5a-8eaa-529ad045b433"
[[LibSSH2_jll]]
deps = ["Artifacts", "Libdl", "MbedTLS_jll"]
uuid = "29816b5a-b9ab-546f-933c-edad1886dfa8"
[[Libdl]]
uuid = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
[[Libffi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "761a393aeccd6aa92ec3515e428c26bf99575b3b"
uuid = "e9f186c6-92d2-5b65-8a66-fee21dc1b490"
version = "3.2.2+0"
[[Libgcrypt_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgpg_error_jll", "Pkg"]
git-tree-sha1 = "64613c82a59c120435c067c2b809fc61cf5166ae"
uuid = "d4300ac3-e22c-5743-9152-c294e39db1e4"
version = "1.8.7+0"
[[Libglvnd_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll", "Xorg_libXext_jll"]
git-tree-sha1 = "7739f837d6447403596a75d19ed01fd08d6f56bf"
uuid = "7e76a0d4-f3c7-5321-8279-8d96eeed0f29"
version = "1.3.0+3"
[[Libgpg_error_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c333716e46366857753e273ce6a69ee0945a6db9"
uuid = "7add5ba3-2f88-524e-9cd5-f83b8a55f7b8"
version = "1.42.0+0"
[[Libiconv_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "42b62845d70a619f063a7da093d995ec8e15e778"
uuid = "94ce4f54-9a6c-5748-9c1c-f9c7231a4531"
version = "1.16.1+1"
[[Libmount_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "9c30530bf0effd46e15e0fdcf2b8636e78cbbd73"
uuid = "4b2f31a3-9ecc-558c-b454-b3730dcb73e9"
version = "2.35.0+0"
[[Libtiff_jll]]
deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Pkg", "Zlib_jll", "Zstd_jll"]
git-tree-sha1 = "340e257aada13f95f98ee352d316c3bed37c8ab9"
uuid = "89763e89-9b03-5906-acba-b20f662cd828"
version = "4.3.0+0"
[[Libuuid_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "7f3efec06033682db852f8b3bc3c1d2b0a0ab066"
uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
version = "2.36.0+0"
[[LinearAlgebra]]
deps = ["Libdl"]
uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
[[LogExpFunctions]]
deps = ["ChainRulesCore", "DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
git-tree-sha1 = "34dc30f868e368f8a17b728a1238f3fcda43931a"
uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
version = "0.3.3"
[[Logging]]
uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"
[[MKL_jll]]
deps = ["Artifacts", "IntelOpenMP_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "Pkg"]
git-tree-sha1 = "5455aef09b40e5020e1520f551fa3135040d4ed0"
uuid = "856f044c-d86e-5d09-b602-aeab76dc8ba7"
version = "2021.1.1+2"
[[MacroTools]]
deps = ["Markdown", "Random"]
git-tree-sha1 = "5a5bc6bf062f0f95e62d0fe0a2d99699fed82dd9"
uuid = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
version = "0.5.8"
[[Markdown]]
deps = ["Base64"]
uuid = "d6f4376e-aef5-505a-96c1-9c027394607a"
[[MbedTLS]]
deps = ["Dates", "MbedTLS_jll", "Random", "Sockets"]
git-tree-sha1 = "1c38e51c3d08ef2278062ebceade0e46cefc96fe"
uuid = "739be429-bea8-5141-9913-cc70e7f3736d"
version = "1.0.3"
[[MbedTLS_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1"
[[Measures]]
git-tree-sha1 = "e498ddeee6f9fdb4551ce855a46f54dbd900245f"
uuid = "442fdcdd-2543-5da2-b0f3-8c86c306513e"
version = "0.3.1"
[[Missings]]
deps = ["DataAPI"]
git-tree-sha1 = "bf210ce90b6c9eed32d25dbcae1ebc565df2687f"
uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28"
version = "1.0.2"
[[Mmap]]
uuid = "a63ad114-7e13-5084-954f-fe012c677804"
[[MozillaCACerts_jll]]
uuid = "14a3606d-f60d-562e-9121-12d972cd8159"
[[NaNMath]]
git-tree-sha1 = "bfe47e760d60b82b66b61d2d44128b62e3a369fb"
uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3"
version = "0.3.5"
[[NetworkOptions]]
uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908"
[[OffsetArrays]]
deps = ["Adapt"]
git-tree-sha1 = "c0e9e582987d36d5a61e650e6e543b9e44d9914b"
uuid = "6fe1bfb0-de20-5000-8ca7-80f57d26f881"
version = "1.10.7"
[[Ogg_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "7937eda4681660b4d6aeeecc2f7e1c81c8ee4e2f"
uuid = "e7412a2a-1a6e-54c0-be00-318e2571c051"
version = "1.3.5+0"
[[OpenSSL_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "15003dcb7d8db3c6c857fda14891a539a8f2705a"
uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95"
version = "1.1.10+0"
[[Opus_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "51a08fb14ec28da2ec7a927c4337e4332c2a4720"
uuid = "91d4177d-7536-5919-b921-800302f37372"
version = "1.3.2+0"
[[OrderedCollections]]
git-tree-sha1 = "85f8e6578bf1f9ee0d11e7bb1b1456435479d47c"
uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
version = "1.4.1"
[[PCRE_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b2a7af664e098055a7529ad1a900ded962bca488"
uuid = "2f80f16e-611a-54ab-bc61-aa92de5b98fc"
version = "8.44.0+0"
[[PaddedViews]]
deps = ["OffsetArrays"]
git-tree-sha1 = "646eed6f6a5d8df6708f15ea7e02a7a2c4fe4800"
uuid = "5432bcbf-9aad-5242-b902-cca2824c8663"
version = "0.5.10"
[[Parsers]]
deps = ["Dates"]
git-tree-sha1 = "a8709b968a1ea6abc2dc1967cb1db6ac9a00dfb6"
uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0"
version = "2.0.5"
[[Pixman_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "b4f5d02549a10e20780a24fce72bea96b6329e29"
uuid = "30392449-352a-5448-841d-b1acce4e97dc"
version = "0.40.1+0"
[[Pkg]]
deps = ["Artifacts", "Dates", "Downloads", "LibGit2", "Libdl", "Logging", "Markdown", "Printf", "REPL", "Random", "SHA", "Serialization", "TOML", "Tar", "UUIDs", "p7zip_jll"]
uuid = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
[[PlotThemes]]
deps = ["PlotUtils", "Requires", "Statistics"]
git-tree-sha1 = "a3a964ce9dc7898193536002a6dd892b1b5a6f1d"
uuid = "ccf2f8ad-2431-5c83-bf29-c5338b663b6a"
version = "2.0.1"
[[PlotUtils]]
deps = ["ColorSchemes", "Colors", "Dates", "Printf", "Random", "Reexport", "Statistics"]
git-tree-sha1 = "b084324b4af5a438cd63619fd006614b3b20b87b"
uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
version = "1.0.15"
[[Plots]]
deps = ["Base64", "Contour", "Dates", "Downloads", "FFMPEG", "FixedPointNumbers", "GR", "GeometryBasics", "JSON", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "PlotThemes", "PlotUtils", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "Requires", "Scratch", "Showoff", "SparseArrays", "Statistics", "StatsBase", "UUIDs"]
git-tree-sha1 = "bc70e31a04f22780b57ad399ff94f9b78a1e7b39"
uuid = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
version = "1.22.5"
[[Preferences]]
deps = ["TOML"]
git-tree-sha1 = "00cfd92944ca9c760982747e9a1d0d5d86ab1e5a"
uuid = "21216c6a-2e73-6563-6e65-726566657250"
version = "1.2.2"
[[Printf]]
deps = ["Unicode"]
uuid = "de0858da-6303-5e67-8744-51eddeeeb8d7"
[[Qt5Base_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Fontconfig_jll", "Glib_jll", "JLLWrappers", "Libdl", "Libglvnd_jll", "OpenSSL_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libxcb_jll", "Xorg_xcb_util_image_jll", "Xorg_xcb_util_keysyms_jll", "Xorg_xcb_util_renderutil_jll", "Xorg_xcb_util_wm_jll", "Zlib_jll", "xkbcommon_jll"]
git-tree-sha1 = "ad368663a5e20dbb8d6dc2fddeefe4dae0781ae8"
uuid = "ea2cea3b-5b76-57ae-a6ef-0a8af62496e1"
version = "5.15.3+0"
[[REPL]]
deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"]
uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
[[Random]]
deps = ["Serialization"]
uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
[[RecipesBase]]
git-tree-sha1 = "44a75aa7a527910ee3d1751d1f0e4148698add9e"
uuid = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
version = "1.1.2"
[[RecipesPipeline]]
deps = ["Dates", "NaNMath", "PlotUtils", "RecipesBase"]
git-tree-sha1 = "7ad0dfa8d03b7bcf8c597f59f5292801730c55b8"
uuid = "01d81517-befc-4cb6-b9ec-a95719d0359c"
version = "0.4.1"
[[Reexport]]
git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b"
uuid = "189a3867-3050-52da-a836-e630ba90ab69"
version = "1.2.2"
[[Requires]]
deps = ["UUIDs"]
git-tree-sha1 = "4036a3bd08ac7e968e27c203d45f5fff15020621"
uuid = "ae029012-a4dd-5104-9daa-d747884805df"
version = "1.1.3"
[[SHA]]
uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce"
[[Scratch]]
deps = ["Dates"]
git-tree-sha1 = "0b4b7f1393cff97c33891da2a0bf69c6ed241fda"
uuid = "6c6a2e73-6563-6170-7368-637461726353"
version = "1.1.0"
[[Serialization]]
uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
[[SharedArrays]]
deps = ["Distributed", "Mmap", "Random", "Serialization"]
uuid = "1a1011a3-84de-559e-8e89-a11a2f7dc383"
[[Showoff]]
deps = ["Dates", "Grisu"]
git-tree-sha1 = "91eddf657aca81df9ae6ceb20b959ae5653ad1de"
uuid = "992d4aef-0814-514b-bc4d-f2e9a6c4116f"
version = "1.0.3"
[[Sockets]]
uuid = "6462fe0b-24de-5631-8697-dd941f90decc"
[[SortingAlgorithms]]
deps = ["DataStructures"]
git-tree-sha1 = "b3363d7460f7d098ca0912c69b082f75625d7508"
uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c"
version = "1.0.1"
[[SparseArrays]]
deps = ["LinearAlgebra", "Random"]
uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
[[SpeechFeatures]]
deps = ["FFTW", "LinearAlgebra", "PaddedViews"]
git-tree-sha1 = "4fa048a3df49d432f959479b3d4a808105f22d3d"
uuid = "6f3487c4-5ca2-4050-bfeb-2cf56df92307"
version = "0.3.1"
[[StaticArrays]]
deps = ["LinearAlgebra", "Random", "Statistics"]
git-tree-sha1 = "3c76dde64d03699e074ac02eb2e8ba8254d428da"
uuid = "90137ffa-7385-5640-81b9-e52037218182"
version = "1.2.13"
[[Statistics]]
deps = ["LinearAlgebra", "SparseArrays"]
uuid = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
[[StatsAPI]]
git-tree-sha1 = "1958272568dc176a1d881acb797beb909c785510"
uuid = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
version = "1.0.0"
[[StatsBase]]
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "LogExpFunctions", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
git-tree-sha1 = "65fb73045d0e9aaa39ea9a29a5e7506d9ef6511f"
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
version = "0.33.11"
[[StructArrays]]
deps = ["Adapt", "DataAPI", "StaticArrays", "Tables"]
git-tree-sha1 = "2ce41e0d042c60ecd131e9fb7154a3bfadbf50d3"
uuid = "09ab397b-f2b6-538f-b94a-2f83cf4a842a"
version = "0.6.3"
[[TOML]]
deps = ["Dates"]
uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
[[TableTraits]]
deps = ["IteratorInterfaceExtensions"]
git-tree-sha1 = "c06b2f539df1c6efa794486abfb6ed2022561a39"
uuid = "3783bdb8-4a98-5b6b-af9a-565f29a5fe9c"
version = "1.0.1"
[[Tables]]
deps = ["DataAPI", "DataValueInterfaces", "IteratorInterfaceExtensions", "LinearAlgebra", "TableTraits", "Test"]
git-tree-sha1 = "fed34d0e71b91734bf0a7e10eb1bb05296ddbcd0"
uuid = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
version = "1.6.0"
[[Tar]]
deps = ["ArgTools", "SHA"]
uuid = "a4e569a6-e804-4fa4-b0f3-eef7a1d5b13e"
[[Test]]
deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[URIs]]
git-tree-sha1 = "97bbe755a53fe859669cd907f2d96aee8d2c1355"
uuid = "5c2747f8-b7ea-4ff2-ba2e-563bfd36b1d4"
version = "1.3.0"
[[UUIDs]]
deps = ["Random", "SHA"]
uuid = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
[[Unicode]]
uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5"
[[WAV]]
deps = ["Base64", "FileIO", "Libdl", "Logging"]
git-tree-sha1 = "1d5dc6568ab6b2846efd10cc4d070bb6be73a6b8"
uuid = "8149f6b0-98f6-5db9-b78f-408fbbb8ef88"
version = "1.1.1"
[[Wayland_jll]]
deps = ["Artifacts", "Expat_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "3e61f0b86f90dacb0bc0e73a0c5a83f6a8636e23"
uuid = "a2964d1f-97da-50d4-b82a-358c7fce9d89"
version = "1.19.0+0"
[[Wayland_protocols_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Wayland_jll"]
git-tree-sha1 = "2839f1c1296940218e35df0bbb220f2a79686670"
uuid = "2381bf8a-dfd0-557d-9999-79630e7b1b91"
version = "1.18.0+4"
[[XML2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "Zlib_jll"]
git-tree-sha1 = "1acf5bdf07aa0907e0a37d3718bb88d4b687b74a"
uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
version = "2.9.12+0"
[[XSLT_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
git-tree-sha1 = "91844873c4085240b95e795f692c4cec4d805f8a"
uuid = "aed1982a-8fda-507f-9586-7b0439959a61"
version = "1.1.34+0"
[[Xorg_libX11_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxcb_jll", "Xorg_xtrans_jll"]
git-tree-sha1 = "5be649d550f3f4b95308bf0183b82e2582876527"
uuid = "4f6342f7-b3d2-589e-9d20-edeb45f2b2bc"
version = "1.6.9+4"
[[Xorg_libXau_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4e490d5c960c314f33885790ed410ff3a94ce67e"
uuid = "0c0b7dd1-d40b-584c-a123-a41640f87eec"
version = "1.0.9+4"
[[Xorg_libXcursor_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXfixes_jll", "Xorg_libXrender_jll"]
git-tree-sha1 = "12e0eb3bc634fa2080c1c37fccf56f7c22989afd"
uuid = "935fb764-8cf2-53bf-bb30-45bb1f8bf724"
version = "1.2.0+4"
[[Xorg_libXdmcp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4fe47bd2247248125c428978740e18a681372dd4"
uuid = "a3789734-cfe1-5b06-b2d0-1dd0d9d62d05"
version = "1.1.3+4"
[[Xorg_libXext_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "b7c0aa8c376b31e4852b360222848637f481f8c3"
uuid = "1082639a-0dae-5f34-9b06-72781eeb8cb3"
version = "1.3.4+4"
[[Xorg_libXfixes_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "0e0dc7431e7a0587559f9294aeec269471c991a4"
uuid = "d091e8ba-531a-589c-9de9-94069b037ed8"
version = "5.0.3+4"
[[Xorg_libXi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll", "Xorg_libXfixes_jll"]
git-tree-sha1 = "89b52bc2160aadc84d707093930ef0bffa641246"
uuid = "a51aa0fd-4e3c-5386-b890-e753decda492"
version = "1.7.10+4"
[[Xorg_libXinerama_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll"]
git-tree-sha1 = "26be8b1c342929259317d8b9f7b53bf2bb73b123"
uuid = "d1454406-59df-5ea1-beac-c340f2130bc3"
version = "1.1.4+4"
[[Xorg_libXrandr_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll"]
git-tree-sha1 = "34cea83cb726fb58f325887bf0612c6b3fb17631"
uuid = "ec84b674-ba8e-5d96-8ba1-2a689ba10484"
version = "1.5.2+4"
[[Xorg_libXrender_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "19560f30fd49f4d4efbe7002a1037f8c43d43b96"
uuid = "ea2f1a96-1ddc-540d-b46f-429655e07cfa"
version = "0.9.10+4"
[[Xorg_libpthread_stubs_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "6783737e45d3c59a4a4c4091f5f88cdcf0908cbb"
uuid = "14d82f49-176c-5ed1-bb49-ad3f5cbd8c74"
version = "0.1.0+3"
[[Xorg_libxcb_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "XSLT_jll", "Xorg_libXau_jll", "Xorg_libXdmcp_jll", "Xorg_libpthread_stubs_jll"]
git-tree-sha1 = "daf17f441228e7a3833846cd048892861cff16d6"
uuid = "c7cfdc94-dc32-55de-ac96-5a1b8d977c5b"
version = "1.13.0+3"
[[Xorg_libxkbfile_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll"]
git-tree-sha1 = "926af861744212db0eb001d9e40b5d16292080b2"
uuid = "cc61e674-0454-545c-8b26-ed2c68acab7a"
version = "1.1.0+4"
[[Xorg_xcb_util_image_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "0fab0a40349ba1cba2c1da699243396ff8e94b97"
uuid = "12413925-8142-5f55-bb0e-6d7ca50bb09b"
version = "0.4.0+1"
[[Xorg_xcb_util_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxcb_jll"]
git-tree-sha1 = "e7fd7b2881fa2eaa72717420894d3938177862d1"
uuid = "2def613f-5ad1-5310-b15b-b15d46f528f5"
version = "0.4.0+1"
[[Xorg_xcb_util_keysyms_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "d1151e2c45a544f32441a567d1690e701ec89b00"
uuid = "975044d2-76e6-5fbe-bf08-97ce7c6574c7"
version = "0.4.0+1"
[[Xorg_xcb_util_renderutil_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "dfd7a8f38d4613b6a575253b3174dd991ca6183e"
uuid = "0d47668e-0667-5a69-a72c-f761630bfb7e"
version = "0.3.9+1"
[[Xorg_xcb_util_wm_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xcb_util_jll"]
git-tree-sha1 = "e78d10aab01a4a154142c5006ed44fd9e8e31b67"
uuid = "c22f9ab0-d5fe-5066-847c-f4bb1cd4e361"
version = "0.4.1+1"
[[Xorg_xkbcomp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libxkbfile_jll"]
git-tree-sha1 = "4bcbf660f6c2e714f87e960a171b119d06ee163b"
uuid = "35661453-b289-5fab-8a00-3d9160c6a3a4"
version = "1.4.2+4"
[[Xorg_xkeyboard_config_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_xkbcomp_jll"]
git-tree-sha1 = "5c8424f8a67c3f2209646d4425f3d415fee5931d"
uuid = "33bec58e-1273-512f-9401-5d533626f822"
version = "2.27.0+4"
[[Xorg_xtrans_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "79c31e7844f6ecf779705fbc12146eb190b7d845"
uuid = "c5fb5394-a638-5e4d-96e5-b29de1b5cf10"
version = "1.4.0+3"
[[Zlib_jll]]
deps = ["Libdl"]
uuid = "83775a58-1f1d-513f-b197-d71354ab007a"
[[Zstd_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "cc4bf3fdde8b7e3e9fa0351bdeedba1cf3b7f6e6"
uuid = "3161d3a3-bdf6-5164-811a-617609db77b4"
version = "1.5.0+0"
[[libass_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "HarfBuzz_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "5982a94fcba20f02f42ace44b9894ee2b140fe47"
uuid = "0ac62f75-1d6f-5e53-bd7c-93b484bb37c0"
version = "0.15.1+0"
[[libfdk_aac_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "daacc84a041563f965be61859a36e17c4e4fcd55"
uuid = "f638f0a6-7fb0-5443-88ba-1cc74229b280"
version = "2.0.2+0"
[[libpng_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "94d180a6d2b5e55e447e2d27a29ed04fe79eb30c"
uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
version = "1.6.38+0"
[[libvorbis_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Ogg_jll", "Pkg"]
git-tree-sha1 = "c45f4e40e7aafe9d086379e5578947ec8b95a8fb"
uuid = "f27f6e37-5d2b-51aa-960f-b287f2bc3b7a"
version = "1.3.7+0"
[[nghttp2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850ede-7688-5339-a07c-302acd2aaf8d"
[[p7zip_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
[[x264_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4fea590b89e6ec504593146bf8b988b2c00922b2"
uuid = "1270edf5-f2f9-52d2-97e9-ab00b5d0237a"
version = "2021.5.5+0"
[[x265_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "ee567a171cce03570d77ad3a43e90218e38937a9"
uuid = "dfaa095f-4041-5dcd-9319-2fabd8486b76"
version = "3.5.0+0"
[[xkbcommon_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Wayland_jll", "Wayland_protocols_jll", "Xorg_libxcb_jll", "Xorg_xkeyboard_config_jll"]
git-tree-sha1 = "ece2350174195bb31de1a63bea3a41ae1aa593b6"
uuid = "d8fb68d0-12a3-5cfd-a85a-d49703b185fd"
version = "0.9.1+5"
"""
# ╔═╡ Cell order:
# ╟─9a0f7ed6-2b6c-11ec-23c2-7fd783169875
# ╠═3d4495cd-3107-4855-8a57-b4154f7af653
# ╟─160a988a-20cb-4ff6-a6a6-7ace16f4a97a
# ╠═332e11ed-69ed-41a3-a1f1-4f718efdffa1
# ╠═cf2a2daa-3334-45d3-83fb-132c0c211d96
# ╠═4f43424d-4d92-4a47-8792-cf0e3a448292
# ╟─baaed408-18f7-4648-a04c-99c58abd907f
# ╠═bf913df5-348a-4a3e-8c96-e52dcceceb43
# ╠═2d9db55a-4826-4173-a3b2-998187803ad2
# ╟─83d7af4e-dddf-4621-81fb-4a5e11c1dcf0
# ╠═fe42604a-b264-43ec-abab-135fd13c326c
# ╠═303a712f-7048-4a55-a938-e92c0d396bfd
# ╠═575d5378-6fc1-4334-8975-913288136aa5
# ╟─d399c618-95e1-43c3-a9f4-1ce75e41f215
# ╠═8b3961a5-954d-4b05-8701-f21bed104d7a
# ╠═e705a092-6e9d-45c8-9194-dbbbd2fa61b7
# ╟─cc5a1be6-88bc-421a-a064-5f3bfa16e21d
# ╠═1031d2f5-18ec-44fc-b10f-640954991b31
# ╠═0d535c94-4a2f-4227-9c62-d757f94f581e
# ╠═84c10764-0d44-43d0-8c31-e34f33df5496
# ╟─00000000-0000-0000-0000-000000000001
# ╟─00000000-0000-0000-0000-000000000002
# SpeechFeatures.jl (https://github.com/lucasondel/SpeechFeatures.jl.git, MIT license, v0.3.2)
# SPDX-License-Identifier: MIT
module SpeechFeatures
using PaddedViews
using FFTW
include("utils.jl")
include("features.jl")
export filterbank, stft, mfcc, add_deltas
end
# SPDX-License-Identifier: MIT
"""
    stft(x; srate=16000, kwargs...) -> S, fftlen

Compute the short-term Fourier transform of the signal `x`, sampled at
`srate` Hz. In addition to the spectrum `S`, the function also returns
the length of the FFT used.
"""
function stft(x_in; srate=16000, dithering=0, removedc=true, frameduration=0.025,
              framestep=0.01, windowfn=HannWindow, windowexp=0.85,
              preemph=0.97)
    x = x_in .+ randn(length(x_in)) * dithering
    if removedc
        x .-= sum(x) / length(x)
    end
    X = hcat(eachframe(x; srate, frameduration, framestep)...)
    foreach(v -> preemphasis!(v; k=preemph), eachcol(X))
    # Taper each frame; the window is raised to `windowexp` to flatten it.
    window = windowfn(Int64(srate*frameduration)) .^ windowexp
    foreach(v -> v .*= window, eachcol(X))
    fftlen = Int64(2^ceil(log2(size(X, 1))))
    pX = PaddedView(0, X, (fftlen, size(X, 2)))
    rfft(pX, 1)[1:end-1,:], fftlen
end
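The framing and FFT-length arithmetic used by `stft` can be sanity-checked in isolation. Below is a minimal Python sketch of the same computations; the function names are illustrative, not part of the package:

```python
import math

def frame_count(nsamples, srate=16000, frameduration=0.025, framestep=0.01):
    # Number of frames a sliding window yields over the signal.
    framesize = int(srate * frameduration)  # 400 samples for 25 ms at 16 kHz
    hopsize = int(srate * framestep)        # 160 samples for 10 ms at 16 kHz
    return 1 + (nsamples - framesize) // hopsize if nsamples > framesize else 0

def next_pow2(n):
    # FFT length used by stft: smallest power of two >= frame size.
    return 2 ** math.ceil(math.log2(n))
```

For one second of 16 kHz audio this gives 98 frames and a 512-point FFT.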
"""
    mfcc(mS; nceps=13, liftering=22)

Compute the cepstral coefficients from the magnitude of the
mel-spectrum `mS`.
"""
function mfcc(mS; nceps=13, liftering=22)
C = dct(log.(mS), 1)[1:nceps,:]
if liftering > 0
lifter = makelifter(size(C,1), liftering)
foreach(x -> x .*= lifter, eachcol(C))
end
C
end
"""
    add_deltas(X; order=2, winlen=2)

Stack the first `order` time derivatives (deltas) of the features onto `X`.
"""
function add_deltas(X; order=2, winlen=2)
X_and_deltas = [X]
for o in 1:order
push!(X_and_deltas, delta(X_and_deltas[end], winlen))
end
vcat(X_and_deltas...)
end
# SPDX-License-Identifier: MIT
#======================================================================
Liftering
======================================================================#
function makelifter(N, L)
t = Vector(1:N)
1 .+ L/2 * sin.(π * t / L)
end
#======================================================================
Pre-emphasis
======================================================================#
function preemphasis!(x; k=0.97)
prev = x[1]
for i in 2:length(x)
prev2 = x[i]
x[i] = x[i] - k*prev
prev = prev2
end
end
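`preemphasis!` applies the first-order high-pass filter `y[n] = x[n] - k*x[n-1]`, leaving the first sample unchanged. For illustration, here is the same update as a small, self-contained Python function (not the package API):

```python
def preemphasis(x, k=0.97):
    # Returns y with y[0] = x[0] and y[n] = x[n] - k*x[n-1] for n >= 1.
    out = [x[0]]
    prev = x[0]
    for v in x[1:]:
        out.append(v - k * prev)
        prev = v
    return out
```

On a constant signal the filter suppresses everything after the first sample down to `(1 - k)` of its value, which is why it boosts high frequencies relative to low ones.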
#======================================================================
Delta features
======================================================================#
function delta(X::AbstractMatrix{T}, deltawin::Int = 2) where T
D, N = size(X)
Δ = zeros(T, D, N)
norm = T(2 * sum(collect(1:deltawin).^2))
for n in 1:N
for θ in 1:deltawin
Δ[:, n] += (θ * (X[:,min(N, n+θ)] - X[:,max(1,n-θ)])) / norm
end
end
Δ
end
#======================================================================
Framing
======================================================================#
struct FrameIterator{T<:AbstractVector}
x::T
framesize::Int64
hopsize::Int64
end
function Base.length(it::FrameIterator)
N = length(it.x)
N > it.framesize ? 1 + (N - it.framesize) ÷ it.hopsize : 0
end
function Base.iterate(it::FrameIterator, idx=1)
1 + length(it.x) - idx < it.framesize && return nothing
(view(it.x, idx:(idx + it.framesize - 1)), idx + it.hopsize)
end
eachframe(x::AbstractVector; srate=16000, frameduration=0.025, framestep=0.01) =
FrameIterator(x, Int64(srate * frameduration), Int64(srate * framestep))
#======================================================================
Window functions
======================================================================#
function HannWindow(T::Type, N::Int)
T(.5) .* (1 .- cos.(T(2π) .* Vector{T}(0:N-1) ./ (N-1)))
end
HannWindow(N::Int) = HannWindow(Float64, N)
function HammingWindow(T::Type, N::Int)
T(0.54) .- T(0.46) .* cos.(T(2π) .* Vector{T}(0:N-1) ./ (N-1))
end
HammingWindow(N::Int) = HammingWindow(Float64, N)
RectangularWindow(T::Type, N::Int) = ones(T, N)
RectangularWindow(N::Int) = RectangularWindow(Float64, N)
#======================================================================
Filter bank
======================================================================#
mel2freq(mel::Real) = 700 * (exp(mel / 1127) - 1)
freq2mel(freq::Real) = 1127 * (log(1 + (freq / 700)))
# Create a set of triangular filters
function filterbank(n::Int; srate::Real = 16000, fftlen::Int = 512,
lofreq::Real = 80, hifreq::Real = 7600)
# Convert the cut-off frequencies into mel
lomel = freq2mel(lofreq)
himel = freq2mel(hifreq)
# Centers (in mel and freqs) of the filterbank
melcenters = range(lomel, himel, length = n + 2)
freqcenters = mel2freq.(melcenters)
# Now get the centers in terms of FFT bins
bincenters = 1 .+ Int64.(floor.( fftlen .* freqcenters ./ srate ))
# Allocate the matrix which will store the filters
D = Int64(ceil(fftlen / 2))
F = zeros(n, D)
# Construct the "triangle"
for f = 1:n
d1 = bincenters[f + 1] - bincenters[f]
d2 = bincenters[f + 2] - bincenters[f + 1]
s = bincenters[f]
e = bincenters[f + 1]
F[f, s:e] = range(0, 1, length = d1 + 1)
s = bincenters[f + 1]
e = bincenters[f + 2]
F[f, s:e] = range(1, 0, length = d2 + 1)
end
F
end
@deprecate FilterBank(args...; kwargs...) filterbank(args...; kwargs...)
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | code | 3094 |
using Documenter
using PyCall
using SpeechFeatures
using Test
DocMeta.setdocmeta!(SpeechFeatures, :DocTestSetup, :(using SpeechFeatures),
recursive = true)
doctest(SpeechFeatures)
#######################################################################
# Utilities
@testset "Utils" begin
x = Vector(1:10)
f1 = collect(SpeechFeatures.eachframe(x; srate=10, frameduration=0.3, framestep=0.2))
f2 = [[1, 2, 3], [3, 4, 5], [5, 6, 7], [7, 8, 9]]
@test all(f1 .== f2)
lifter1 = SpeechFeatures.makelifter(10, 22)
lifter2 = [2.56546322, 4.09905813, 5.56956514, 6.94704899, 8.20346807,
9.31324532, 10.25378886, 11.00595195, 11.55442271, 11.88803586]
@test all(lifter1 .≈ lifter2)
X = Float64[1 2 3; 2 3 4]
Y1 = SpeechFeatures.delta(X)
Y2 = [5/10 6/10 5/10; 5/10 6/10 5/10]
@test all(Y1 .≈ Y2)
end
#######################################################################
# Window functions
@testset "Window functions" begin
N = 10
w1 = SpeechFeatures.RectangularWindow(N)
w2 = ones(N)
@test all(w1 .≈ w2)
w1 = SpeechFeatures.RectangularWindow(Float32, N)
@test eltype(w1) == Float32
w1 = SpeechFeatures.HannWindow(N)
w2 = 0.5 .* (1 .- cos.(2π .* Vector(0:N-1) ./ (N-1) ))
@test all(w1 .≈ w2)
w1 = SpeechFeatures.HannWindow(Float32, N)
@test eltype(w1) == Float32
w1 = SpeechFeatures.HammingWindow(N)
w2 = 0.54 .- 0.46 .* cos.(2π .* Vector(0:N-1) ./ (N-1) )
@test all(w1 .≈ w2)
w1 = SpeechFeatures.HammingWindow(Float32, N)
@test eltype(w1) == Float32
end
#######################################################################
# Filter bank
py"""
import numpy as np
def create_filter(num, fft_len, lo_freq, hi_freq, samp_freq):
filter_num = num
filter_mat = np.zeros((fft_len // 2, filter_num))
mel2freq = lambda mel: 700.0 * (np.exp(mel / 1127.0) - 1)
freq2mel = lambda freq: 1127 * (np.log(1 + (freq / 700.0)))
lo_mel = freq2mel(lo_freq);
hi_mel = freq2mel(hi_freq);
mel_c = np.linspace(lo_mel, hi_mel, filter_num + 2)
freq_c = mel2freq(mel_c);
point_c = freq_c / float(samp_freq) * fft_len
point_c = np.floor(point_c).astype('int')
for f in range(filter_num):
d1 = point_c[f + 1] - point_c[f]
d2 = point_c[f + 2] - point_c[f + 1]
filter_mat[point_c[f]:point_c[f + 1] + 1, f] = np.linspace(0, 1, d1 + 1)
filter_mat[point_c[f + 1]:point_c[f + 2] + 1, f] = np.linspace(1, 0, d2 + 1)
return filter_mat
"""
@testset "FBANK" begin
m = 12.75
f = 100.12
@test SpeechFeatures.mel2freq(m) ≈ 700 * (exp(m / 1127) - 1)
@test typeof(SpeechFeatures.mel2freq(Float32(m))) == Float32
@test SpeechFeatures.freq2mel(f) ≈ 1127 * log(1 + (f / 700))
@test typeof(SpeechFeatures.freq2mel(Float32(f))) == Float32
fbank1 = SpeechFeatures.FilterBank(26; srate = 16000, fftlen = 512,
lofreq = 80, hifreq = 7600);
fbank2 = py"create_filter(26, 512, 80, 7600, 16000)"
@test all(fbank1 .≈ fbank2')
end
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | docs | 480 | # Releases
## 0.3.2
* Avoid GC pressure by doing most of the operations in `stft` and
`mfcc` in-place.
## 0.3.1
* Renamed `FilterBank` to `filterbank` to homogenize the user interface.
The previous `FilterBank` function is still exported but is marked
as deprecated.
## 0.3.0
* Simplified the code and refactored the user interface.
* Added Pluto notebook examples.
## 0.2.0
* The output features are in matrix form instead of arrays of arrays.
## 0.1.0
* initial release
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | docs | 1192 | # SpeechFeatures.jl
*SpeechFeatures* is a Julia package for extracting acoustic features
for speech technologies.
| **Test Status** |
|:-----------------:|
|  |
See the [changelog file](CHANGELOG.md) to check what's new since the
last release.
## Installation
The package can be installed with the Julia package manager. From the
Julia REPL, type ] to enter the Pkg REPL mode and run:
```
pkg> add SpeechFeatures
```
## Quick start
To get the [MFCC](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum)
features:
```julia
using SpeechFeatures
# x = ... extracted signal
# fs = ... sampling frequency
S, fftlen = stft(x; srate=fs) # Complex short-term spectrum.
fbank = filterbank(26; fftlen=fftlen)
mS = fbank * abs.(S) # Magnitude of the Mel-spectrum.
MFCCs = mfcc(mS; nceps=13) # Standard MFCCs.
MFCCs_Δ_ΔΔ = add_deltas(MFCCs; order=2) # MFCCs + 1st and 2nd order derivatives.
```
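
The features are returned as a matrix with one column per frame, so ordinary matrix operations apply directly. As an illustrative sketch (not part of the package API), a per-utterance cepstral mean normalization can be written as:

```julia
using Statistics

# Hypothetical feature matrix: 13 cepstral coefficients × 100 frames.
features = randn(13, 100)

# Subtract the per-utterance mean of each coefficient (row-wise mean).
features_mn = features .- mean(features; dims=2)
```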
Have a look at the [examples](https://github.com/lucasondel/SpeechFeatures.jl/tree/master/examples)
to get started.
## Author
[Lucas Ondel](https://lucasondel.github.io), [LISN](https://www.lisn.upsaclay.fr/) 2021
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | docs | 5104 | # Features extraction
## Loading an audio file
To extract any type of speech features you will need the audio signal
stored in an `Array`-like object and the sampling rate in Hertz.
SpeechFeatures does not provide a way to load these two elements from
audio files directly but there are several Julia packages to do this.
In this tutorial, we will use [WAV.jl](https://github.com/dancasimiro/WAV.jl).
For the rest of the tutorial, we assume that you have installed the
WAV.jl package in your Julia environment.
First of all, as an example, we download an audio file from the
[TIMIT](https://catalog.ldc.upenn.edu/LDC93S1) corpus. In the Julia
REPL type:
```juliashowcase
julia> run(`wget https://catalog.ldc.upenn.edu/desc/addenda/LDC93S1.wav`)
```
Now, we load the audio waveform:
```julia
julia> using WAV
julia> channels, srate = wavread("LDC93S1.wav", format = "double")
```
Here `channels` is an `N`x`C` matrix, where `N` is the length of the audio in
samples and `C` is the number of channels. Since TIMIT is recorded in mono,
it has only one channel. `format = "double"` indicates that the
signals in `channels` will be encoded in double precision and each
sample of the signal will lie between `-1.0` and `1.0`.
!!! warning
The `wavread` function also accepts `format = "native"` which will
return the data in the format it is stored in the WAV file. We
    discourage its use, as extracting the features from an integer- or
    floating-point-encoded signal can lead to drastically different
    output.
We get the signal from the `channels` matrix:
```julia
julia> x = channels[:, 1]
```
As a sanity check, we print the sampling rate and duration of the
signal:
```julia
julia> println("sampling freq: $srate Hz\nduration: $(round(length(x) / srate, digits=2)) s")
sampling freq: 16000.0 Hz
duration: 2.92 s
```
and we plot the waveform:
```julia
julia> using Plots
julia> pyplot()
julia> t = range(0, length(x) / srate, length=length(x))
julia> plot(t, x, size = (1000, 300), xlabel = "time (seconds)", legend = false)
```

## Extracting the features
All the different types of features supported by this package follow
the same extraction scheme.
1. create the feature extractor object with a specific configuration
2. send the signal(s) to this extractor to get the features.
SpeechFeatures provides the following feature extractors:
| Extractor | Constructor | Description |
|:-----------|:------------|:------------|
| Log magnitude spectrum | `LogMagnitudeSpectrum([options])` | Logarithm of the magnitude of the Short Term Fourier Transform (STFT) |
| Log Mel Spectrum | `LogMelSpectrum([options])` | Logarithm of the STFT transformed via a mel-spaced filter bank. |
| Mel Cepstral Coefficients (MFCCs) | `MFCC([options])` | Classical MFCC features |
As an example, we will use the popular Mel Frequency Cepstral
Coefficients (MFCC) features. First we create the extractor
with the default configuration:
```julia
julia> mfcc = MFCC()
```
and then, we extract the features from our TIMIT sample:
```julia
julia> fea = x |> mfcc
```
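
The result `fea` is a matrix with one column per analysis frame. As a quick visual check (assuming the Plots.jl setup from the previous section), the features can be displayed as a heatmap:

```julia
julia> heatmap(fea, xlabel = "frame index", ylabel = "coefficient index")
```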
Here is the list of possible options for each extractor:
| Option name | Default | Supported by | Description |
|:------------|:--------|:-------------|:-------------|
| `removedc` | `true` | all | Remove the direct component from the signal. |
| `dithering` | `true` | all | Add Gaussian white noise with `dithering` standard deviation. |
| `srate` | `16000` | all | Sampling rate in Hz of the input signal |
| `frameduration` | `0.025` | all | Frame duration in seconds. |
| `framestep` | `0.01` | all | Frame step (hop size) in seconds. |
| `preemphasis` | `0.97` | all | Preemphasis filter coefficient. |
| `windowfn` | `SpeechFeatures.HannWindow` | all | Windowing function (others are `HammingWindow` or `RectangularWindow`). |
| `windowpower` | `0.85` | all | Sharpening exponent of the window. |
| `nfilters` | `26` | LogMelSpectrum \| MFCC | Number of filters in the filter bank. |
| `lofreq` | `80` | LogMelSpectrum \| MFCC | Low cut-off frequency in Hz for the filter bank. |
| `hifreq` | `7600` | LogMelSpectrum \| MFCC | High cut-off frequency in Hz for the filter bank. |
| `addenergy` | `true` | MFCC | Append the per-frame energy to the features. |
| `nceps` | `12` | MFCC | Number of cepstral coefficients. |
| `liftering` | `22` | MFCC | Liftering coefficient. |
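
Each option is passed as a keyword argument to the extractor constructor. For instance, to request a Hamming window and a 10 ms frame step (values chosen purely for illustration):

```julia
julia> mfcc = MFCC(windowfn = SpeechFeatures.HammingWindow, framestep = 0.01)
julia> fea = x |> mfcc
```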
## Deltas and mean normalization
The deltas and acceleration coefficients (i.e. "double deltas") can
be computed by chaining the features extraction with the
deltas features extractor:
```julia
julia> Δ_ΔΔ = DeltaCoeffs(order = 2, deltawin = 2)
julia> fea = x |> mfcc |> Δ_ΔΔ
```
The `order` parameter is the order of the delta coefficients, i.e.
`order = 2` means that the first and second delta (acceleration)
coefficients will be computed. `deltawin` is the length of the delta
window.
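
As a concrete illustration, with `deltawin = 2` the derivative at each frame is a normalized, weighted difference of the neighbouring frames (clamped at the utterance edges). Applying the internal helper to a tiny two-row feature matrix gives:

```julia
julia> X = Float64[1 2 3; 2 3 4]
julia> SpeechFeatures.delta(X, 2)
2×3 Matrix{Float64}:
 0.5  0.6  0.5
 0.5  0.6  0.5
```

These values match the ones checked in the package's own test suite.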
Similarly, to remove the mean of the utterance you can add one more
element to the chain:
```julia
julia> mnorm = MeanNorm()
julia> fea = x |> mfcc |> Δ_ΔΔ |> mnorm
```
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | docs | 480 | # SpeechFeatures.jl
[SpeechFeatures.jl](https://github.com/lucasondel/SpeechFeatures.jl)
is a Julia package for extracting acoustic features for speech
technologies.
## Authors
* Lucas Ondel ([website](https://lucasondel.github.io/))
## Installation
The package can be installed with the Julia package manager. From the
Julia REPL, type ] to enter the Pkg REPL mode and run:
```julia
pkg> add SpeechFeatures
```
## Manual Outline
```@contents
Pages = ["feaextract.md"]
```
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 0.3.2 | f1980bf82ef171e8de32482397ab786303af3b15 | docs | 93 | These files are intended to be run as [Pluto notebooks](https://github.com/fonsp/Pluto.jl).
| SpeechFeatures | https://github.com/lucasondel/SpeechFeatures.jl.git |
|
[
"MIT"
] | 2.1.8 | 60064ee64e69dea5fbb380ce80c5714268e1761b | code | 153 | using Documenter, AzSessions
makedocs(sitename="AzSessions", modules=[AzSessions])
deploydocs(
repo = "github.com/ChevronETC/AzSessions.jl.git",
)
| AzSessions | https://github.com/ChevronETC/AzSessions.jl.git |
|
[
"MIT"
] | 2.1.8 | 60064ee64e69dea5fbb380ce80c5714268e1761b | code | 31360 | module AzSessions
using Base64, Dates, HTTP, JSON, JSONWebTokens, Logging, MbedTLS, Sockets
function logerror(e, loglevel=Logging.Info)
io = IOBuffer()
showerror(io, e)
write(io, "\n\terror type: $(typeof(e))\n")
local my_current_exceptions
if VERSION < v"1.7"
        my_current_exceptions = Base.catch_stack
else
my_current_exceptions = current_exceptions
end
for (exc, bt) in my_current_exceptions()
showerror(io, exc, bt)
println(io)
end
@logmsg loglevel String(take!(io))
close(io)
end
const _manifest = Dict("client_id"=>"", "client_secret"=>"", "tenant"=>"", "protocol"=>"")
manifestpath() = joinpath(homedir(), ".azsessions")
manifestfile() = joinpath(manifestpath(), "manifest.json")
# allow for the correct spelling of "protocol" and a common mis-spelling ("protocal")
function spelling_mistake(protocol::AbstractString, protocal::AbstractString)
if protocol == "" && protocal != ""
protocol = protocal
end
protocol
end
function spelling_mistake(protocol, protocal)
if protocol === nothing && protocal !== nothing
protocol = protocal
end
protocol
end
"""
AzSessions.write_manifest(;client_id="", client_secret="", tenant="", protocol="")
Write an AzSessions manifest file (~/.azsessions/manifest.json). The
manifest file contains account specific credentials.
# Notes
## client secret
The client can be configured such that the `client_secret` is not
required for the authorization-code-flow and device-code-flow. In this
scenario, one may choose to omit setting the `client_secret` in the manifest.
For example:
```julia
AzSessions.write_manifest(;client_id="myclientid", tenant="mytenant")
```
## protocol
The protocol is one of "AzAuthCodeFlowCredentials", "AzDeviceCodeFlowCredentials", "AzClientCredentials"
and "AzVMCredentials". If the default `protocol=""` is chosen for the manifest, then `AzSession()` will
default to `AzDeviceCodeFlowCredentials`. The protocol in the manifest can always be over-ridden using
the `protocol` argument to `AzSession`.
"""
function write_manifest(;client_id="", client_secret = "", tenant="", protocol="", protocal="")
manifest = Dict("client_id"=>client_id, "client_secret"=>client_secret, "tenant"=>tenant, "protocol"=>spelling_mistake(string(protocal), string(protocol)))
try
isdir(manifestpath()) || mkdir(manifestpath(); mode=0o700)
write(manifestfile(), json(manifest, 1))
chmod(manifestfile(), 0o600)
catch e
@error "Failed to write manifest file, $(AzSessions.manifestfile())"
throw(e)
end
end
function load_manifest()
if isfile(manifestfile())
try
manifest = JSON.parse(read(manifestfile(), String))
for key in keys(_manifest)
_manifest[key] = get(manifest, key, "")
end
catch e
@error "Manifest file ($(AzSessions.manifestfile())) is not valid JSON"
throw(e)
end
else
@error "Manifest file ($(AzSessions.manifestfile())) does not exist. Use AzSessions.write_manifest to generate a manifest file."
end
end
#
# retry logic
#
function isretryable(e::HTTP.Exceptions.StatusError, s)
e.status == 404 && (return true,s)
e.status >= 500 && (return true,s)
if e.status == 429
for header in e.response.headers
if lowercase(header[1]) == "retry-after"
s = parse(Int, header[2]) + rand()
return true,s
end
end
end
if e.status == 400
b = JSON.parse(String(e.response.body))
        if first(get(b, "error_codes", [0])) == 50196 # server encountered a client request loop
@warn "received client request loop error code [50196]."
s = rand(120:180) # chosen emperically
return true,s
end
end
false,s
end
isretryable(e::Base.IOError, s) = true,s
isretryable(e::HTTP.Exceptions.ConnectError, s) = true,s
isretryable(e::HTTP.Exceptions.RequestError, s) = true,s
isretryable(e::HTTP.Exceptions.TimeoutError, s) = true,s
isretryable(e::MbedTLS.MbedException, s) = true,s
isretryable(e::Base.EOFError, s) = true,s
isretryable(e::Sockets.DNSError, s) = true,s
isretryable(e, s) = false,s
function retrywarn(i, s, e)
@warn "retry $i, sleeping for $s seconds"
logerror(e, Logging.Warn)
end
macro retry(retries, ex::Expr)
quote
local r
for i = 1:($(esc(retries))+1)
try
r = $(esc(ex))
break
catch e
maximum_backoff = 60
s = min(2.0^(i-1), maximum_backoff) + rand()
_isretryable,s = isretryable(e, s)
(i < $(esc(retries)) && _isretryable) || throw(e)
retrywarn(i, s, e)
sleep(s)
end
end
r
end
end
abstract type AzSessionAbstract end
"""
token(session[; offset=Second(rand(300:600))])
Return the OAuth2 token associate with `session`. The `offset` ensures
that the token is valid for at least `offset` time. The default offset
is randomized between 5 and 15 minutes. We randomize the offset to avoid
calling the Azure authentication end-point at the same time from many
VMs operating in parallel.
"""
function token end
"""
scrub!(session)
Remove sensitive information from `session` (e.g. token, client secret)
"""
function scrub! end
#
# Client credentials
#
struct AzClientCredentials end
mutable struct AzClientCredentialsSession <: AzSessionAbstract
protocol::String
client_id::String
client_secret::String
expiry::DateTime
resource::String
tenant::String
token::String
end
function AzClientCredentialsSession(;
client_id = _manifest["client_id"],
client_secret = _manifest["client_secret"],
resource = "https://management.azure.com/",
tenant = _manifest["tenant"])
client_secret == "" && error("AzClientCredentials requires client_secret, but got client_secret=\"\"")
AzClientCredentialsSession(string(AzClientCredentials), client_id, client_secret, now(Dates.UTC), resource, tenant, "")
end
function AzClientCredentialsSession(d::Dict)
AzClientCredentialsSession(
spelling_mistake(get(d, "protocol", ""), get(d, "protocal", "")),
d["client_id"],
d["client_secret"],
DateTime(d["expiry"]),
d["resource"],
d["tenant"],
d["token"])
end
function Base.copy(session::AzClientCredentialsSession)
AzClientCredentialsSession(
session.protocol,
session.client_id,
session.client_secret,
session.expiry,
session.resource,
session.tenant,
session.token)
end
unqualify_protocol_string(protocol) = replace(protocol, "AzSessions."=>"")
function samesession(session1::AzClientCredentialsSession, session2::AzClientCredentialsSession)
unqualify_protocol_string(session1.protocol) == unqualify_protocol_string(session2.protocol) &&
session1.client_id == session2.client_id &&
session1.client_secret == session2.client_secret &&
session1.resource == session2.resource &&
session1.tenant == session2.tenant
end
function token(session::AzClientCredentialsSession; offset=Second(rand(300:600)))
session.token != "" && now(Dates.UTC) < (session.expiry - offset) && return session.token
r = @retry 10 HTTP.request(
"POST",
"https://login.microsoft.com/$(session.tenant)/oauth2/token",
["Content-Type" => "application/x-www-form-urlencoded"],
"grant_type=client_credentials&client_id=$(session.client_id)&client_secret=$(HTTP.escapeuri(session.client_secret))&resource=$(HTTP.escapeuri(session.resource))",
retry = false)
rbody = JSON.parse(String(r.body))
session.token = rbody["access_token"]
session.expiry = now(Dates.UTC) + Dates.Second(rbody["expires_in"])
session.token
end
function scrub!(session::AzClientCredentialsSession)
session.token = ""
session.client_secret = ""
session
end
Base.show(io::IO, session::AzClientCredentialsSession) = write(io, "Azure client credentials session")
#
# VirtualMachine credentials
#
struct AzVMCredentials end
mutable struct AzVMSession <: AzSessionAbstract
protocol::String
expiry::DateTime
resource::String
token::String
end
function AzVMSession(;resource = "https://management.azure.com/")
AzVMSession(string(AzVMCredentials), now(Dates.UTC), resource, "")
end
function AzVMSession(d::Dict)
AzVMSession(
spelling_mistake(get(d, "protocol", ""), get(d, "protocal", "")),
DateTime(d["expiry"]),
d["resource"],
d["token"])
end
function Base.copy(session::AzVMSession)
AzVMSession(
session.protocol,
session.expiry,
session.resource,
session.token)
end
function samesession(session1::AzVMSession, session2::AzVMSession)
unqualify_protocol_string(session1.protocol) == unqualify_protocol_string(session2.protocol) && session1.resource == session2.resource
end
function token(session::AzVMSession; offset=Second(rand(300:600)))
session.token != "" && now(Dates.UTC) < (session.expiry - offset) && return session.token
r = @retry 10 HTTP.request(
"GET",
"http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=$(session.resource)",
["Metadata"=>"true"],
retry = false)
rbody = JSON.parse(String(r.body))
session.token = rbody["access_token"]
session.expiry = now(Dates.UTC) + Dates.Second(rbody["expires_in"])
session.token
end
function scrub!(session::AzVMSession)
session.token = ""
session
end
Base.show(io::IO, session::AzVMSession) = write(io, "Azure virtual machine credentials session")
function mergescopes(scope1, scope2)
scopes1 = split(scope1, (' ', '+'))
scopes2 = split(scope2, (' ', '+'))
join(union(scopes1, scopes2), '+')
end
#
# Authorization code flow credentials
#
struct AzAuthCodeFlowCredentials end
mutable struct AzAuthCodeFlowSession <: AzSessionAbstract
protocol::String
client_id::String
expiry::DateTime
id_token::String
lock::Bool
redirect_uri::String
refresh_token::String
scope::String
scope_auth::String
tenant::String
token::String
end
function AzAuthCodeFlowSession(;
client_id = _manifest["client_id"],
redirect_uri = "http://localhost:44300/reply",
scope = "openid+offline_access+https://management.azure.com/user_impersonation",
scope_auth = "openid+offline_access+https://management.azure.com/user_impersonation+https://storage.azure.com/user_impersonation",
tenant = _manifest["tenant"])
AzAuthCodeFlowSession(string(AzAuthCodeFlowCredentials), client_id, now(Dates.UTC), "", false, redirect_uri, "", scope, mergescopes(scope, scope_auth), tenant, "")
end
function AzAuthCodeFlowSession(d::Dict)
AzAuthCodeFlowSession(
spelling_mistake(get(d, "protocol", ""), get(d, "protocal", "")),
d["client_id"],
DateTime(d["expiry"]),
d["id_token"],
d["lock"],
d["redirect_uri"],
d["refresh_token"],
d["scope"],
d["scope_auth"],
d["tenant"],
d["token"])
end
function AzSession(session::AzAuthCodeFlowSession; scope="", lazy=false)
scope == "" && (scope = session.scope)
_session = AzAuthCodeFlowSession(
session.protocol,
session.client_id,
session.expiry,
session.id_token,
session.lock,
session.redirect_uri,
session.refresh_token,
scope,
session.scope_auth,
session.tenant,
session.token)
lazy || token(_session)
_session
end
function Base.copy(session::AzAuthCodeFlowSession)
AzAuthCodeFlowSession(
session.protocol,
session.client_id,
session.expiry,
session.id_token,
session.lock,
session.redirect_uri,
session.refresh_token,
session.scope,
session.scope_auth,
session.tenant,
session.token)
end
function samesession(session1::AzAuthCodeFlowSession, session2::AzAuthCodeFlowSession)
unqualify_protocol_string(session1.protocol) == unqualify_protocol_string(session2.protocol) &&
session1.client_id == session2.client_id &&
session1.redirect_uri == session2.redirect_uri &&
samescope(session1.scope, session2.scope) &&
samescope(session1.scope_auth, session2.scope_auth) &&
session1.tenant == session2.tenant
end
session_has_tokens(session::AzAuthCodeFlowSession) = session.token != "" && session.refresh_token != ""
function update_session_from_cached_session!(session::AzAuthCodeFlowSession, cached_session::AzAuthCodeFlowSession)
session.expiry = cached_session.expiry
session.id_token = cached_session.id_token
session.refresh_token = cached_session.refresh_token
session.token = cached_session.token
end
function audience_from_token(token)
local audience
try
decodedJWT = JSONWebTokens.decode(JSONWebTokens.None(), token)
audience = get(decodedJWT, "aud", "")
catch
@warn "Unable to retrieve audience from token."
audience = ""
end
audience
end
function audience_from_scope(scope)
scopes = split(scope, ('+',' '))
i = findfirst(_scope->startswith(_scope, "https://"), scopes)
"https://"*split(replace(scopes[i], "https://"=>""), '/')[1]
end
function _token(session::AzAuthCodeFlowSession, bootstrap=false; offset=Second(rand(300:600)))
while session.lock
sleep(1)
end
session.lock = true
# use the existing token:
if session.token != "" && now(Dates.UTC) < (session.expiry - offset) && audience_from_token(session.token) == audience_from_scope(session.scope)
session.lock = false
return session.token
end
# use the refresh token to get a new token:
if session.refresh_token != "" && refresh_token(session)
session.lock = false
return session.token
end
if bootstrap_token_from_cache!(session, bootstrap; offset)
session.lock = false
        return session.token
end
# otherwise, user is required to authenticate:
port = parse(Int, parse(HTTP.URI, session.redirect_uri).port)
state = rand(Int)
auth_code = ""
@debug "starting server..."
local server
try
server = Sockets.listen(Sockets.localhost, port)
catch
error("AzSessions: there is already a server listening on port $port")
end
with_logger(NullLogger()) do
tsk = @async HTTP.serve(Sockets.localhost, port; server=server) do request::HTTP.Request
queries = split(parse(HTTP.URI, request.target).query, '&')
for query in queries
q = split(query, '=')
if q[1] == "code"
auth_code = q[2]
break
end
end
HTTP.Response(200, "Logged in via AzSessions.jl")
end
end
authcode_uri = "https://login.microsoft.com/$(session.tenant)/oauth2/v2.0/authorize?client_id=$(session.client_id)&response_type=code&redirect_uri=$(session.redirect_uri)&response_mode=query&scope=$(session.scope_auth)&state=$state&prompt=select_account"
exitcode = 1
if Sys.iswindows()
cmd = get(ENV, "COMSPEC", "cmd")
_authcode_uri = replace(authcode_uri, "&"=>"^&")
c = open(`$cmd /c start $_authcode_uri`)
wait(c)
exitcode = c.exitcode
elseif Sys.islinux()
c = open(`gio open $authcode_uri`)
wait(c)
exitcode = c.exitcode
end
if exitcode != 0
@info "Failed to open browser. To authenticate, please open the following url on the local machine:\n\t$authcode_uri"
end
while auth_code == ""
sleep(1)
end
close(server)
token_uri = "https://login.microsoftonline.com/$(session.tenant)/oauth2/v2.0/token"
token_body = "client_id=$(session.client_id)&scope=$(session.scope)&code=$auth_code&redirect_uri=$(session.redirect_uri)&grant_type=authorization_code"
@debug "trading auth code for token..."
r = @retry 10 HTTP.request(
"POST",
token_uri,
["Content-Type"=>"application/x-www-form-urlencoded"],
token_body;
retry = false)
rbody = JSON.parse(String(r.body))
session.token = rbody["access_token"]
session.id_token = get(rbody, "id_token", "") # only exists if openid is used in the scope
session.refresh_token = get(rbody, "refresh_token", "") # online exists if offline_access is used in the scope
session.expiry = now(Dates.UTC) + Dates.Second(rbody["expires_in"])
session_has_tokens(session) && record_session(session)
session.token
end
function scrub!(session::AzAuthCodeFlowSession)
session.token = ""
session.id_token = ""
session.refresh_token = ""
session
end
Base.show(io::IO, session::AzAuthCodeFlowSession) = write(io, "Azure authorization code flow session")
#
# Device code flow credentials
#
struct AzDeviceCodeFlowCredentials end
mutable struct AzDeviceCodeFlowSession <: AzSessionAbstract
protocol::String
client_id::String
expiry::DateTime
id_token::String
lock::Bool
refresh_token::String
scope::String
scope_auth::String
tenant::String
token::String
end
function AzDeviceCodeFlowSession(;
client_id = _manifest["client_id"],
scope = "openid+offline_access+https://management.azure.com/user_impersonation",
scope_auth = "openid+offline_access+https://management.azure.com/user_impersonation+https://storage.azure.com/user_impersonation",
tenant = _manifest["tenant"])
AzDeviceCodeFlowSession(string(AzDeviceCodeFlowCredentials), client_id, now(Dates.UTC), "", false, "", scope, mergescopes(scope, scope_auth), tenant, "")
end
function AzDeviceCodeFlowSession(d::Dict)
AzDeviceCodeFlowSession(
spelling_mistake(get(d, "protocol", ""), get(d, "protocal", "")),
d["client_id"],
DateTime(d["expiry"]),
d["id_token"],
d["lock"],
d["refresh_token"],
d["scope"],
d["scope_auth"],
d["tenant"],
d["token"])
end
function AzSession(session::AzDeviceCodeFlowSession; scope="", lazy=false)
scope == "" && (scope = session.scope)
_session = AzDeviceCodeFlowSession(
session.protocol,
session.client_id,
session.expiry,
session.id_token,
session.lock,
session.refresh_token,
scope,
session.scope_auth,
session.tenant,
session.token)
lazy || token(_session)
_session
end
function Base.copy(session::AzDeviceCodeFlowSession)
AzDeviceCodeFlowSession(
session.protocol,
session.client_id,
session.expiry,
session.id_token,
session.lock,
session.refresh_token,
session.scope,
session.scope_auth,
session.tenant,
session.token)
end
function samesession(session1::AzDeviceCodeFlowSession, session2::AzDeviceCodeFlowSession)
unqualify_protocol_string(session1.protocol) == unqualify_protocol_string(session2.protocol) &&
session1.client_id == session2.client_id &&
samescope(session1.scope, session2.scope) &&
session1.tenant == session2.tenant
end
session_has_tokens(session::AzDeviceCodeFlowSession) = session.token != "" && session.refresh_token != ""
function update_session_from_cached_session!(session::AzDeviceCodeFlowSession, cached_session::AzDeviceCodeFlowSession)
session.expiry = cached_session.expiry
session.id_token = cached_session.id_token
session.refresh_token = cached_session.refresh_token
session.token = cached_session.token
end
function _token(session::AzDeviceCodeFlowSession, bootstrap=false; offset=Second(rand(300:600)))
while session.lock
sleep(1)
end
session.lock = true
# use the existing token:
if session.token != "" && now(Dates.UTC) < (session.expiry - offset) && audience_from_scope(session.scope) == audience_from_token(session.token)
session.lock = false
return session.token
end
# use the refresh token to get a new token:
if session.refresh_token != "" && refresh_token(session)
session.lock = false
return session.token
end
if bootstrap_token_from_cache!(session, bootstrap; offset)
session.lock = false
return session.token
end
_r = @retry 1 HTTP.request(
"POST",
"https://login.microsoft.com/$(session.tenant)/oauth2/v2.0/devicecode",
["Content-Type"=>"application/x-www-form-urlencoded"],
"client_id=$(session.client_id)&scope=$(session.scope)")
r = JSON.parse(String(_r.body))
device_code = r["device_code"]
@info r["message"]
flush(stdout)
flush(stderr)
local _r
while true
_r = @retry 1 HTTP.request(
"POST",
"https://login.microsoft.com/$(session.tenant)/oauth2/v2.0/token",
["Content-Type"=>"application/x-www-form-urlencoded"],
"grant_type=urn:ietf:params:oauth:grant-type:device_code&client_id=$(session.client_id)&device_code=$device_code";
status_exception = false,
retry = false)
_r.status == 200 && break
__r = String(_r.body)
r = JSON.parse(__r)
if r["error"] == "authorization_pending"
sleep(5)
else
error(__r)
end
end
r = JSON.parse(String(_r.body))
session.id_token = get(r, "id_token", "") # only exists if openid is used in scope
session.refresh_token = get(r, "refresh_token", "") # only exists if offline_access is used in scope
session.token = r["access_token"]
session.expiry = now(Dates.UTC) + Dates.Second(r["expires_in"])
session_has_tokens(session) && record_session(session)
session.token
end
function refresh_token(session::Union{AzAuthCodeFlowSession, AzDeviceCodeFlowSession})
resource = audience_from_scope(session.scope)
body = "client_id=$(session.client_id)&refresh_token=$(session.refresh_token)&grant_type=refresh_token&scope=$(session.scope)&resource=$resource"
r = @retry 10 HTTP.request(
"POST",
"https://login.microsoftonline.com/$(session.tenant)/oauth2/token",
["Content-Type"=>"application/x-www-form-urlencoded"],
body;
retry = false)
rbody = JSON.parse(String(r.body))
local status
if haskey(rbody, "error")
status = false
else
status = true
session.token = rbody["access_token"]
session.refresh_token = rbody["refresh_token"]
session.expiry = now(Dates.UTC) + Dates.Second(rbody["expires_in"])
end
status
end
function scrub!(session::AzDeviceCodeFlowSession)
session.token = ""
session.id_token = ""
session.refresh_token = ""
session
end
Base.show(io::IO, session::AzDeviceCodeFlowSession) = write(io, "Azure device code flow credentials session")
function AzCredentials(protocol::AbstractString)
protocols = Dict("AzClientCredentials"=>AzClientCredentials, "AzDeviceCodeCredentials"=>AzDeviceCodeFlowCredentials, "AzAuthCodeFlowCredentials"=>AzAuthCodeFlowCredentials, "AzVMCredentials"=>AzVMCredentials, ""=>nothing)
if !haskey(protocols, protocol)
error("Authentication protocol, $protocol, is not recognized.")
end
protocols[protocol]
end
#
# Recording sessions to disk
#
sessionpath() = joinpath(homedir(), ".azsessions")
sessionfile() = joinpath(sessionpath(), "sessions.json")
function bootstrap_token_from_cache!(session, bootstrap; offset)
cached_session, session_is_recorded = get_recorded_session(session)
if session_is_recorded
if bootstrap == false
update_session_from_cached_session!(session, cached_session)
session.lock = false
token(session, true; offset)
return true
else
@warn "failed to use cached token, token cache may be corrupted."
end
end
false
end
unqualify_json_sessions(json_sessions) = replace(json_sessions, "AzSessions."=>"")
function recorded_sessions()
local rsessions
if isfile(sessionfile())
rsessions = JSON.parse(unqualify_json_sessions(read(sessionfile(), String)))
else
rsessions = Dict("sessions"=>[])
end
rsessions
end
function write_sessions(rsessions)
rm(sessionfile(); force=true)
write(sessionfile(), unqualify_json_sessions(json(rsessions)))
chmod(sessionfile(), 0o400)
end
function record_session(session)
if !isdir(sessionpath())
try
mkdir(sessionpath(); mode=0o700)
catch
@warn "unable to make directory $(sessionpath()): will not record sessions"
return
end
end
rsessions = recorded_sessions()
has_session = false
    for (i, json_recorded_session) in enumerate(rsessions["sessions"])
        if samesession(session, AzSession(json_recorded_session))
            rsessions["sessions"][i] = json(session)
            has_session = true
        end
    end
if !has_session
pushfirst!(rsessions["sessions"], json(session))
end
write_sessions(rsessions)
end
samesession(session1, session2) = false
function samescope(scope1, scope2)
scopes1 = split(scope1, '+')
scopes2 = split(scope2, '+')
if length(scopes1) != length(scopes2)
return false
else
for _scope1 in scopes1
if _scope1 ∉ scopes2
return false
end
end
end
true
end
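# A hedged usage sketch of `samescope` (scopes are '+'-delimited and compared
# order-insensitively, so these hypothetical inputs behave as follows):
#
#   samescope("openid+offline_access", "offline_access+openid")  # true
#   samescope("openid", "openid+offline_access")                 # false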
function get_recorded_session(session)
rsessions = recorded_sessions()
for json_recorded_session in rsessions["sessions"]
recorded_session = AzSession(json_recorded_session)
if samesession(session, recorded_session)
return recorded_session, true
end
end
session, false
end
function delete_session(session)
rsessions = recorded_sessions()
i = 0
for (isession, json_recorded_session) in enumerate(rsessions["sessions"])
recorded_session = AzSession(json_recorded_session)
if samesession(session, recorded_session)
i = isession
break
end
end
if i > 0
deleteat!(rsessions["sessions"], i)
end
write_sessions(rsessions)
end
function token(session::Union{AzAuthCodeFlowSession, AzDeviceCodeFlowSession}, bootstrap=false; offset=Second(rand(300:600)))
try
_token(session, bootstrap; offset)
finally
session.lock = false
end
end
#
# API
#
"""
session = AzSession([; kwargs...])
Create an Azure session for authentication using a specific authentication
protocol. The available protocols and their `kwargs` are as follows.
## Authorization code flow
```julia
session = AzSession(;
protocol = _manifest["protocol"] | AzDeviceCodeFlowCredentials,
client_id = AzSessions._manifest["client_id"],
redirect_uri = "http://localhost:44300/reply",
scope = "openid+offline_access+https://storage.azure.com/user_impersonation",
scope_auth = "openid+offline_access+https://management.azure.com/user_impersonation+https://storage.azure.com/user_impersonation",
tenant = AzSessions._manifest["tenant"],
lazy = false,
clearcache = false)
```
## Device code flow
```julia
session = AzSession(;
    protocol = AzDeviceCodeFlowCredentials,
client_id = AzSessions._manifest["client_id"],
scope = "openid+offline_access+https://management.azure.com/user_impersonation",
scope_auth = "openid+offline_access+https://management.azure.com/user_impersonation+https://storage.azure.com/user_impersonation",
tenant = AzSessions._manifest["tenant"],
clearcache = false)
```
## Client Credentials
```julia
session = AzSession(;
protocol = AzClientCredentials,
tenant=AzSessions._manifest["tenant"],
client_id=AzSessions._manifest["client_id"],
client_secret=AzSessions._manifest["client_secret"],
resource="https://management.azure.com/",
clearcache = false)
```
## VM Credentials
```julia
session = AzSession(;
protocol = AzVMCredentials,
resource = "https://management.azure.com/",
clearcache = false)
```
## New audience
Create a session from an existing auth code flow session or device code flow session,
but with a new scope. This means that we can get a session with a new audience without
requiring re-authentication. Note that the new scope must be in `session.scope_auth`.
```julia
session = AzSession(;
protocol=AzAuthCodeFlowCredentials,
scope_auth="openid+offline_access+https://management.azure.com/user_impersonation+https://storage.azure.com/user_impersonation",
scope="openid+offline_access+https://management.azure.com/user_impersonation")
t = token(session) # token for `https://management.azure.com` audience
session = AzSession(session; scope="openid+offline_access+https://storage.azure.com/user_impersonation")
t = token(session) # token for `https://storage.azure.com` audience without needing to re-authenticate
```
# Notes
* If `lazy=false`, then authenticate at the time of construction. Otherwise, wait until the first use of the session before authenticating.
* If `clearcache=false`, then check the session-cache for an existing token rather than re-authenticating. The cache is stored in a JSON file (`~/.azsessions/sessions.json`).
* The default protocol can be set in the manifest (see the `AzSessions.write_manifest` method for more information).
"""
function AzSession(; protocol=nothing, protocal=nothing, lazy=false, clearcache=false, kwargs...)
protocol = spelling_mistake(protocol, protocal)
load_manifest()
protocol === nothing && (protocol = AzCredentials(spelling_mistake(get(_manifest, "protocol", ""), get(_manifest, "protocal", ""))))
protocol === nothing && (protocol = AzDeviceCodeFlowCredentials)
local session
if protocol == AzClientCredentials
session = AzClientCredentialsSession(;kwargs...)
elseif protocol == AzVMCredentials
session = AzVMSession(;kwargs...)
elseif protocol == AzAuthCodeFlowCredentials
session = AzAuthCodeFlowSession(;kwargs...)
elseif protocol == AzDeviceCodeFlowCredentials
session = AzDeviceCodeFlowSession(;kwargs...)
else
error("Unknown credentials protocol.")
end
clearcache && delete_session(session)
lazy || token(session)
session
end
function AzSession(d::Dict)
protocol = replace(spelling_mistake(get(d, "protocol", ""), get(d, "protocal", "")), "AzSessions."=>"")
if protocol == "AzClientCredentials"
AzClientCredentialsSession(d)
elseif protocol == "AzVMCredentials"
AzVMSession(d)
elseif protocol == "AzAuthCodeFlowCredentials"
AzAuthCodeFlowSession(d)
elseif protocol == "AzDeviceCodeFlowCredentials"
AzDeviceCodeFlowSession(d)
else
error("Unknown credentials protocol: $protocol.")
end
end
AzSession(jsonobject::String) = AzSession(JSON.parse(jsonobject))
AzSession(session::AzSessionAbstract) = session
export AzAuthCodeFlowCredentials, AzClientCredentials, AzDeviceCodeFlowCredentials, AzSession, AzSessionAbstract, AzVMCredentials, scrub!, token
end
using AzSessions, Dates, HTTP, JSON, JSONWebTokens, Logging, Test
AzSessions.write_manifest(;client_id=ENV["CLIENT_ID"], client_secret=ENV["CLIENT_SECRET"], tenant=ENV["TENANT"])
function running_on_azure()
try
HTTP.request(
"GET",
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
Dict("Metadata"=>"true"))
return true
catch
return false
end
end
if running_on_azure()
# TODO - not sure why this doesn't work on CI
@test_skip @testset "AzSessions, VM" begin
session = AzSession(;protocol=AzVMCredentials)
@test now(Dates.UTC) <= session.expiry
t = token(session)
@test isa(t,String)
t2 = token(session)
@test t2 == t
session.token = "x"
session.expiry = now(Dates.UTC) - Dates.Second(1)
t2 = token(session)
@test t2 != "x"
end
end
@testset "AzSessions, Client Credentials" begin
session = AzSession(;protocol=AzClientCredentials, client_id=ENV["CLIENT_ID"], client_secret=ENV["CLIENT_SECRET"])
@test now(Dates.UTC) < session.expiry
t = token(session)
@test isa(t,String)
t2 = token(session)
@test t2 == t
session.token = "x"
session.expiry = now(Dates.UTC) - Dates.Second(1)
t2 = token(session)
@test t2 != "x"
end
@testset "AzSessions, Client Credentials with mis-spelling" begin
session = AzSession(;protocal=AzClientCredentials, client_id=ENV["CLIENT_ID"], client_secret=ENV["CLIENT_SECRET"])
@test now(Dates.UTC) < session.expiry
t = token(session)
@test isa(t,String)
t2 = token(session)
@test t2 == t
session.token = "x"
session.expiry = now(Dates.UTC) - Dates.Second(1)
t2 = token(session)
@test t2 != "x"
end
# TODO: requires user interaction (can we use Mocking.jl)
@test_skip @testset "AzSessions, Device code flow credentials" begin
session = AzSession(;protocol=AzDeviceCodeFlowCredentials)
@test now(Dates.UTC) <= session.expiry
t = token(session)
@test isa(t,String)
t2 = token(session)
@test t2 == t
    session.token = "x"
session.expiry = now(Dates.UTC) - Dates.Second(1)
t2 = token(session)
@test t2 != "x"
session2 = AzSession(session;scope="https://storage.azure.com/user_impersonation")
t = token(session2)
decodedJWT = JSONWebTokens.decode(JSONWebTokens.None(), t)
@test decodedJWT["aud"] == "https://storage.azure.com"
end
# TODO - the following testset will only work if the machine can start a web-browser
@test_skip @testset "AzSessions, Authorization code flow credentials" begin
session = AzSession(;protocol=AzAuthCodeFlowCredentials)
@test now(Dates.UTC) <= session.expiry
t = token(session)
@test isa(t,String)
t2 = token(session)
@test t2 == t
    session.token = "x"
session.expiry = now(Dates.UTC) - Dates.Second(1)
t2 = token(session)
@test t2 != "x"
session2 = AzSession(session;scope="https://storage.azure.com/user_impersonation")
t = token(session2)
decodedJWT = JSONWebTokens.decode(JSONWebTokens.None(), t)
@test decodedJWT["aud"] == "https://storage.azure.com"
end
# TODO: requires user interaction, can we use Mocking.jl?
@test_skip @testset "AzSessions, Device code flow credentials is the default" begin
session = AzSession()
@test now(Dates.UTC) <= session.expiry
t = token(session)
@test isa(t,String)
t2 = token(session)
@test t2 == t
    session.token = "x"
session.expiry = now(Dates.UTC) - Dates.Second(1)
t2 = token(session)
@test t2 != "x"
end
@testset "AzSessions, Client Credentials, serialize" begin
session = AzSessions.AzClientCredentialsSession(
"AzClientCredentials",
"myclientid",
"myclientsecret",
now(),
"myresource",
"mytenant",
"mytoken")
jsonsession = json(session)
_session = AzSession(jsonsession)
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.client_secret == _session.client_secret
@test session.expiry == _session.expiry
@test session.resource == _session.resource
@test session.tenant == _session.tenant
@test session.token == _session.token
_session = AzSession(JSON.parse(jsonsession))
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.client_secret == _session.client_secret
@test session.expiry == _session.expiry
@test session.resource == _session.resource
@test session.tenant == _session.tenant
@test session.token == _session.token
end
@testset "AzSessions, VM Credentials, serialize" begin
session = AzSessions.AzVMSession(
"AzVMCredentials",
now(),
"myresource",
"mytoken")
jsonsession = json(session)
_session = AzSession(jsonsession)
@test session.protocol == _session.protocol
@test session.expiry == _session.expiry
@test session.resource == _session.resource
@test session.token == _session.token
end
@testset "AzSessions, Auth Code Credentials, serialize" begin
session = AzSessions.AzAuthCodeFlowSession(
"AzAuthCodeFlowCredentials",
"clientid",
now(),
"myidtoken",
false,
"redirecturi",
"refreshtoken",
"scopeauth",
"scopetoken",
"tenant",
"token")
jsonsession = json(session)
_session = AzSession(jsonsession)
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.expiry == _session.expiry
@test session.id_token == _session.id_token
@test session.lock == _session.lock
@test session.redirect_uri == _session.redirect_uri
@test session.refresh_token == _session.refresh_token
@test session.scope_auth == _session.scope_auth
@test session.scope == _session.scope
@test session.tenant == _session.tenant
@test session.token == _session.token
_session = AzSession(JSON.parse(jsonsession))
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.expiry == _session.expiry
@test session.id_token == _session.id_token
@test session.lock == _session.lock
@test session.redirect_uri == _session.redirect_uri
@test session.refresh_token == _session.refresh_token
@test session.scope_auth == _session.scope_auth
@test session.scope == _session.scope
@test session.tenant == _session.tenant
@test session.token == _session.token
end
@testset "AzSessions, Device Code Credentials, serialize" begin
session = AzSessions.AzDeviceCodeFlowSession(
"AzDeviceCodeFlowCredentials",
"myclientid",
now(),
"myidtoken",
true,
"myrefreshtoken",
"myscope",
"myscope_auth",
"mytenant",
"mytoken")
jsonsession = json(session)
_session = AzSession(jsonsession)
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.expiry == _session.expiry
@test session.id_token == _session.id_token
@test session.lock == _session.lock
@test session.refresh_token == _session.refresh_token
@test session.scope == _session.scope
@test session.scope_auth == _session.scope_auth
@test session.tenant == _session.tenant
@test session.token == _session.token
_session = AzSession(JSON.parse(jsonsession))
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.expiry == _session.expiry
@test session.id_token == _session.id_token
@test session.lock == _session.lock
@test session.refresh_token == _session.refresh_token
@test session.scope == _session.scope
@test session.scope_auth == _session.scope_auth
@test session.tenant == _session.tenant
@test session.token == _session.token
end
@testset "AzSesions, Client Credentials, copy" begin
session = AzSessions.AzClientCredentialsSession(
"AzClientCredentials",
"myclientid",
"myclientsecret",
now(),
"myresource",
"mytenant",
"mytoken")
_session = copy(session)
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.client_secret == _session.client_secret
@test session.expiry == _session.expiry
@test session.resource == _session.resource
@test session.tenant == _session.tenant
@test session.token == _session.token
end
@testset "AzSessions, VM Credentials, copy" begin
session = AzSessions.AzVMSession(
"AzVMCredentials",
now(),
"myresource",
"mytoken")
_session = copy(session)
@test session.protocol == _session.protocol
@test session.expiry == _session.expiry
@test session.resource == _session.resource
@test session.token == _session.token
end
@testset "AzSessions, Auth code flow credentials, copy" begin
session = AzSessions.AzAuthCodeFlowSession(
"AzAuthCodeFlowCredentials",
"clientid",
now(),
"myidtoken",
false,
"redirecturi",
"refreshtoken",
"scopeauth",
"scopetoken",
"tenant",
"token")
_session = copy(session)
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.expiry == _session.expiry
@test session.id_token == _session.id_token
@test session.lock == _session.lock
@test session.redirect_uri == _session.redirect_uri
@test session.refresh_token == _session.refresh_token
@test session.scope_auth == _session.scope_auth
@test session.scope == _session.scope
@test session.tenant == _session.tenant
@test session.token == _session.token
end
@testset "AzSessions, Device Code Credentials, copy" begin
session = AzSessions.AzDeviceCodeFlowSession(
"AzDeviceCodeFlowCredentials",
"myclientid",
now(),
"myidtoken",
true,
"myrefreshtoken",
"myscope",
"myscope_auth",
"mytenant",
"mytoken")
_session = copy(session)
@test session.protocol == _session.protocol
@test session.client_id == _session.client_id
@test session.expiry == _session.expiry
@test session.id_token == _session.id_token
@test session.lock == _session.lock
@test session.refresh_token == _session.refresh_token
@test session.scope == _session.scope
@test session.scope_auth == _session.scope_auth
@test session.tenant == _session.tenant
@test session.token == _session.token
end
@testset "AzSessions, merge scopes" begin
session = AzSession(;
scope="openid+offline_access+https://management.azure.com/user_impersonation",
scope_auth = "openid+offline_access+https://storage.azure.com/user_impersonation",
lazy = true)
scopes = split(session.scope_auth, '+')
@test length(scopes) == 4
@test length(unique(scopes)) == 4
for scope in scopes
@test scope ∈ ["openid","offline_access","https://management.azure.com/user_impersonation","https://storage.azure.com/user_impersonation"]
end
end
@testset "AzSessions, Client credentials, scrub" begin
session = AzSessions.AzClientCredentialsSession(
"AzClientCredentials",
"myclientid",
"myclientsecret",
now(),
"myresource",
"mytenant",
"mytoken")
scrub!(session)
@test session.client_secret == ""
@test session.token == ""
end
@testset "AzSessions, Auth code flow, scrub" begin
session = AzSessions.AzAuthCodeFlowSession(
"AzAuthCodeFlowCredentials",
"clientid",
now(),
"myidtoken",
false,
"redirecturi",
"refreshtoken",
"scopeauth",
"scopetoken",
"tenant",
"token")
scrub!(session)
@test session.id_token == ""
@test session.refresh_token == ""
@test session.token == ""
end
@testset "AzSessions, Device code flow, scrub" begin
session = AzSessions.AzDeviceCodeFlowSession(
"AzDeviceCodeFlowCredentials",
"myclientid",
now(),
"myidtoken",
true,
"myrefreshtoken",
"myscope",
"myscope_auth",
"mytenant",
"mytoken")
scrub!(session)
@test session.id_token == ""
@test session.refresh_token == ""
@test session.token == ""
end
@testset "AzSessions, write_manifest" begin
AzSessions.write_manifest(;client_id="myclientid", client_secret="myclientsecret", tenant="mytenant")
manifest = JSON.parse(read(AzSessions.manifestfile(), String))
@test manifest["client_id"] == "myclientid"
@test manifest["client_secret"] == "myclientsecret"
@test manifest["tenant"] == "mytenant"
end
AzSessions.write_manifest(;client_id=ENV["CLIENT_ID"], client_secret=ENV["CLIENT_SECRET"], tenant=ENV["TENANT"], protocol="AzClientCredentials")
@testset "AzSessions, Client Credentials is the default" begin
session = AzSession()
@test session.protocol == "AzClientCredentials"
@test now(Dates.UTC) < session.expiry
t = token(session)
@test isa(t,String)
t2 = token(session)
@test t2 == t
session.token = "x"
session.expiry = now(Dates.UTC) - Dates.Second(1)
t2 = token(session)
@test t2 != "x"
end
AzSessions.write_manifest(;client_id=ENV["CLIENT_ID"], client_secret=ENV["CLIENT_SECRET"], tenant=ENV["TENANT"], protocol="")
@testset "AzSessions, retrywarn" begin
e = HTTP.StatusError(401, "GET", "https://foo", HTTP.Response(401, "body"))
io = IOBuffer()
logger = ConsoleLogger(io, Logging.Info)
with_logger(logger) do
AzSessions.retrywarn(2, 60, e)
end
s = String(take!(io))
@test contains(s, "401")
end
# AzSessions.jl contributor guidelines
## Issue reporting
If you have found a bug in AzSessions.jl or have any suggestions for changes to
AzSessions.jl functionality, please consider filing an issue using the GitHub
issue tracker. Please do not forget to search for an existing issue
which may already cover your suggestion.
## Contributing
We try to follow GitHub flow (https://guides.github.com/introduction/flow/) for
making changes to AzSessions.jl.
Contributors retain copyright on their contributions, and the MIT license
(https://opensource.org/licenses/MIT) applies to the contribution.
The basic steps to making a contribution are as follows, and assume some knowledge of
git:
1. fork the AzSessions.jl repository
2. create an appropriately titled branch for your contribution
3. if applicable, add a unit-test to ensure the functionality of your contribution
(see the `test` subfolder).
4. run `]test AzSessions` in the `test` folder
5. make a pull-request
6. have fun
## Coding conventions
We try to follow the same coding conventions as https://github.com/JuliaLang/julia.
This primarily means using 4 spaces to indent (no tabs). In addition, we make a
best attempt to follow the guidelines in the style guide chapter of the julia
manual: https://docs.julialang.org/en/v1/manual/style-guide/
# AzSessions
| **Documentation** | **Action Statuses** |
|:---:|:---:|
| [![][docs-dev-img]][docs-dev-url] [![][docs-stable-img]][docs-stable-url] | [![][doc-build-status-img]][doc-build-status-url] [![][build-status-img]][build-status-url] [![][code-coverage-img]][code-coverage-results] |
Authentication for Azure Cloud using Active Directory (OAuth2). This package
supports 1) VM credentials, 2) client credentials, 3) authorization code flow and 4) device
code flow.
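
A minimal usage sketch (see the documentation for the full set of protocols and keyword arguments; the device code flow shown here prompts you to authenticate interactively):

```julia
using AzSessions
session = AzSession(; protocol=AzDeviceCodeFlowCredentials)
t = token(session)  # an OAuth2 bearer token for the requested scope
```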
[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-dev-url]: https://chevronetc.github.io/AzSessions.jl/dev/
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://ChevronETC.github.io/AzSessions.jl/stable
[doc-build-status-img]: https://github.com/ChevronETC/AzSessions.jl/workflows/Documentation/badge.svg
[doc-build-status-url]: https://github.com/ChevronETC/AzSessions.jl/actions?query=workflow%3ADocumentation
[build-status-img]: https://github.com/ChevronETC/AzSessions.jl/workflows/Tests/badge.svg
[build-status-url]: https://github.com/ChevronETC/AzSessions.jl/actions?query=workflow%3A"Tests"
[code-coverage-img]: https://codecov.io/gh/ChevronETC/AzSessions.jl/branch/master/graph/badge.svg
[code-coverage-results]: https://codecov.io/gh/ChevronETC/AzSessions.jl
# Examples
```julia
using AzSessions
# VM identity authentication
session = AzSession(;protocol=AzVMCredentials)
t = token(session)
# Client credentials authentication
session = AzSession(;protocol=AzClientCredentials, client_id="myclientid", client_secret="xxxxxxxxxxxxxxx")
t = token(session)
# Device code flow authentication
session = AzSession()
t = token(session)
# ...or...
session = AzSession(;protocol=AzDeviceCodeFlowCredentials)
t = token(session)
# Authorization code flow authentication
session = AzSession(;protocol=AzAuthCodeFlowCredentials)
t = token(session)
```
# AzSessions
Authentication for Azure Cloud using Active Directory (OAuth2). At
present, this package supports 1) VM credentials, 2) client
credentials, 3) authorization code flow and 4) device code flow.
## Setup
AzSessions keeps state in the `~/.azsessions` folder. In particular,
it 1) uses `~/.azsessions/sessions.json` to store OAuth2
tokens, and 2) uses `~/.azsessions/manifest.json` to store
information specific to your Azure account.
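
A sketch of where this state lives, using the paths described above:

```julia
sessionpath  = joinpath(homedir(), ".azsessions")
manifestfile = joinpath(sessionpath, "manifest.json")  # Azure account information
sessionfile  = joinpath(sessionpath, "sessions.json")  # cached OAuth2 tokens
```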
Use AzSessions to create the `manifest.json` file:
```julia
AzSessions.write_manifest(;client_id="myclientid", client_secret="myclientsecret", tenant="mytenant")
```
or in the case that you do not have access to the `client_secret`:
```julia
AzSessions.write_manifest(;client_id="myclientid", tenant="mytenant")
```
Once the `manifest.json` file exists, AzSessions will use its values as defaults.
For example, when using client credentials to authenticate, AzSessions will use
the `client_id`, `client_secret` and `tenant` in `manifest.json`. On the other hand,
when using the authorization code flow or the device code flow, AzSessions will use
the `client_id` and the `tenant` but will not use the `client_secret`. The latter is
especially useful if you are working in an environment where your administrator does not
share the `client_secret` with the users.
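
With a manifest in place, a session can be constructed without repeating these values (a sketch; the protocol shown is just one of the supported options):

```julia
using AzSessions
# client_id and tenant are read from ~/.azsessions/manifest.json
session = AzSession(; protocol=AzDeviceCodeFlowCredentials)
```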
Note that the manifest can also be used to store your preferred protocol. For example:
```julia
AzSessions.write_manifest(;client_id="myclientid", client_secret="myclientsecret", tenant="mytenant", protocol="AzClientCredentials")
```
# Further reading
For more information about OAuth2 and authentication for Azure, please refer to:
<https://docs.microsoft.com/en-us/rest/api/storageservices/authenticate-with-azure-active-directory>
For more information on authentication via a VM identity:
<https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token>
For more information on authentication via client credentials:
<https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow>
Please note that we only support authentication via a shared-secret, and not via
a certificate.
For more information on authentication via the authorization code grant flow:
<https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-auth-code-flow>
For more information on authentication via the device code flow:
<https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code>
# Reference
```@docs
AzSession
token
scrub!
AzSessions.write_manifest
```
#!/usr/bin/env julia
# Root of the repository
const repo_root = dirname(@__DIR__)
# Make sure docs environment is active and instantiated
import Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
# Communicate with docs/make.jl that we are running in live mode
push!(ARGS, "liveserver")
# Run LiveServer.servedocs(...)
import LiveServer
LiveServer.servedocs(;
# Documentation root where make.jl and src/ are located
foldername = joinpath(repo_root, "docs"),
# Extra source folder to watch for changes
include_dirs = [
# Watch the src folder so docstrings can be Revise'd
joinpath(repo_root, "src"),
]
)
push!(LOAD_PATH,"../src/")
using Documenter, FerriteViz, WGLMakie
makedocs(sitename="FerriteViz",
modules=[FerriteViz],
authors="Maximilian Köhler",
format=Documenter.HTML(prettyurls=false),
pages=["Home"=> "index.md",
"Tutorial" => "tutorial.md",
"Advanced Topics" => "atopics.md",
"api.md",
"devdocs.md",
],
)
deploydocs(repo = "github.com/Ferrite-FEM/FerriteViz.jl.git",
push_preview=true,
forcepush=true,)
using Ferrite, SparseArrays
function assemble_heat_element!(Ke::Matrix, fe::Vector, cellvalues::CellScalarValues, coords::Vector, rhs::Function)
n_basefuncs = getnbasefunctions(cellvalues)
fill!(Ke, 0)
fill!(fe, 0)
for q_point in 1:getnquadpoints(cellvalues)
dΩ = getdetJdV(cellvalues, q_point)
x = spatial_coordinate(cellvalues, q_point, coords)
for i in 1:n_basefuncs
δu = shape_value(cellvalues, q_point, i)
∇δu = shape_gradient(cellvalues, q_point, i)
fe[i] += rhs(x) * δu * dΩ
for j in 1:n_basefuncs
∇u = shape_gradient(cellvalues, q_point, j)
Ke[i, j] += (∇δu ⋅ ∇u) * dΩ
end
end
end
return Ke, fe
end
function assemble_steady_heat_global(cellvalues::CellScalarValues, K::SparseMatrixCSC, dh::DofHandler, rhs::Function)
n_basefuncs = getnbasefunctions(cellvalues)
Ke = zeros(n_basefuncs, n_basefuncs)
fe = zeros(n_basefuncs)
f = zeros(ndofs(dh))
assembler = start_assemble(K, f)
for cell in CellIterator(dh)
reinit!(cellvalues, cell)
assemble_heat_element!(Ke, fe, cellvalues, getcoordinates(cell), rhs)
assemble!(assembler, celldofs(cell), Ke, fe)
end
return K, f
end
function manufactured_heat_problem(element_type, ip, num_elements_per_dim)
dim = Ferrite.getdim(ip)
grid = generate_grid(element_type, ntuple(x->num_elements_per_dim, dim));
ip_geo = Ferrite.default_interpolation(typeof(grid.cells[1]))
qr = QuadratureRule{dim, Ferrite.getrefshape(ip)}(2*Ferrite.getorder(ip))
cellvalues = CellScalarValues(qr, ip, ip_geo);
∂Ω = union(
getfaceset(grid, "left"),
getfaceset(grid, "right"),
getfaceset(grid, "top"),
getfaceset(grid, "bottom"),
);
if dim == 3
∂Ω = union(
∂Ω,
getfaceset(grid, "front"),
getfaceset(grid, "back")
)
end
dh = DofHandler(grid)
push!(dh, :u, 1, ip)
close!(dh);
K = create_sparsity_pattern(dh)
ch = ConstraintHandler(dh);
dbc = Dirichlet(:u, ∂Ω, (x, t) -> 0)
add!(ch, dbc);
close!(ch)
update!(ch, 0.0);
K, f = assemble_steady_heat_global(cellvalues, K, dh, x->(π/2)^2 * dim * prod(cos, x*π/2));
apply!(K, f, ch)
u = K \ f;
return dh, u
end
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | code | 5905 | using Ferrite
using BlockArrays, SparseArrays, LinearAlgebra
function create_cook_grid(nx, ny)
corners = [Tensors.Vec{2}((0.0, 0.0)),
Tensors.Vec{2}((48.0, 44.0)),
Tensors.Vec{2}((48.0, 60.0)),
Tensors.Vec{2}((0.0, 44.0))]
grid = generate_grid(Quadrilateral, (nx, ny), corners);
# facesets for boundary conditions
addfaceset!(grid, "clamped", x -> norm(x[1]) ≈ 0.0);
addfaceset!(grid, "traction", x -> norm(x[1]) ≈ 48.0);
return grid
end;
function create_values(interpolation_u, interpolation_p)
# quadrature rules
qr = QuadratureRule{2,RefCube}(3)
face_qr = QuadratureRule{1,RefCube}(3)
# geometric interpolation
interpolation_geom = Lagrange{2,RefCube,1}()
# cell and facevalues for u
cellvalues_u = CellVectorValues(qr, interpolation_u, interpolation_geom)
facevalues_u = FaceVectorValues(face_qr, interpolation_u, interpolation_geom)
# cellvalues for p
cellvalues_p = CellScalarValues(qr, interpolation_p, interpolation_geom)
return cellvalues_u, cellvalues_p, facevalues_u
end;
function create_dofhandler(grid, ipu, ipp)
dh = DofHandler(grid)
push!(dh, :u, 2, ipu) # displacement
push!(dh, :p, 1, ipp) # pressure
close!(dh)
return dh
end;
function create_bc(dh)
dbc = ConstraintHandler(dh)
add!(dbc, Dirichlet(:u, getfaceset(dh.grid, "clamped"), (x,t) -> zero(Tensors.Vec{2}), [1,2]))
close!(dbc)
t = 0.0
update!(dbc, t)
return dbc
end;
struct LinearElasticity{T}
G::T
K::T
end
function doassemble(cellvalues_u::CellVectorValues{dim}, cellvalues_p::CellScalarValues{dim},
facevalues_u::FaceVectorValues{dim}, K::SparseMatrixCSC, grid::Grid,
dh::DofHandler, mp::LinearElasticity) where {dim}
f = zeros(ndofs(dh))
assembler = start_assemble(K, f)
nu = getnbasefunctions(cellvalues_u)
np = getnbasefunctions(cellvalues_p)
fe = PseudoBlockArray(zeros(nu + np), [nu, np]) # local force vector
ke = PseudoBlockArray(zeros(nu + np, nu + np), [nu, np], [nu, np]) # local stiffness matrix
# traction vector
t = Tensors.Vec{2}((0.0, 1/16))
# cache ɛdev outside the element routine to avoid some unnecessary allocations
ɛdev = [zero(SymmetricTensor{2, dim}) for i in 1:getnbasefunctions(cellvalues_u)]
for cell in CellIterator(dh)
fill!(ke, 0)
fill!(fe, 0)
assemble_up!(ke, fe, cell, cellvalues_u, cellvalues_p, facevalues_u, grid, mp, ɛdev, t)
assemble!(assembler, celldofs(cell), fe, ke)
end
return K, f
end;
function assemble_up!(Ke, fe, cell, cellvalues_u, cellvalues_p, facevalues_u, grid, mp, ɛdev, t)
n_basefuncs_u = getnbasefunctions(cellvalues_u)
n_basefuncs_p = getnbasefunctions(cellvalues_p)
u▄, p▄ = 1, 2
reinit!(cellvalues_u, cell)
reinit!(cellvalues_p, cell)
# We only assemble the lower triangle of the stiffness matrix and then symmetrize it.
@inbounds for q_point in 1:getnquadpoints(cellvalues_u)
for i in 1:n_basefuncs_u
ɛdev[i] = dev(symmetric(shape_gradient(cellvalues_u, q_point, i)))
end
dΩ = getdetJdV(cellvalues_u, q_point)
for i in 1:n_basefuncs_u
divδu = shape_divergence(cellvalues_u, q_point, i)
δu = shape_value(cellvalues_u, q_point, i)
for j in 1:i
Ke[BlockIndex((u▄, u▄), (i, j))] += 2 * mp.G * ɛdev[i] ⊡ ɛdev[j] * dΩ
end
end
for i in 1:n_basefuncs_p
δp = shape_value(cellvalues_p, q_point, i)
for j in 1:n_basefuncs_u
divδu = shape_divergence(cellvalues_u, q_point, j)
Ke[BlockIndex((p▄, u▄), (i, j))] += -δp * divδu * dΩ
end
for j in 1:i
p = shape_value(cellvalues_p, q_point, j)
Ke[BlockIndex((p▄, p▄), (i, j))] += - 1/mp.K * δp * p * dΩ
end
end
end
symmetrize_lower!(Ke)
# We integrate the Neumann boundary using the facevalues.
# We loop over all the faces in the cell, then check if the face
# is in our `"traction"` faceset.
@inbounds for face in 1:nfaces(cell)
if onboundary(cell, face) && (cellid(cell), face) ∈ getfaceset(grid, "traction")
reinit!(facevalues_u, cell, face)
for q_point in 1:getnquadpoints(facevalues_u)
dΓ = getdetJdV(facevalues_u, q_point)
for i in 1:n_basefuncs_u
δu = shape_value(facevalues_u, q_point, i)
fe[i] += (δu ⋅ t) * dΓ
end
end
end
end
end
function symmetrize_lower!(K)
for i in 1:size(K,1)
for j in i+1:size(K,1)
K[i,j] = K[j,i]
end
end
end;
function solve(interpolation_u, interpolation_p, mp)
# grid, dofhandler, boundary condition
n = 50
grid = create_cook_grid(n, n)
dh = create_dofhandler(grid, interpolation_u, interpolation_p)
dbc = create_bc(dh)
# cellvalues
cellvalues_u, cellvalues_p, facevalues_u = create_values(interpolation_u, interpolation_p)
# assembly and solve
K = create_sparsity_pattern(dh);
K, f = doassemble(cellvalues_u, cellvalues_p, facevalues_u, K, grid, dh, mp);
apply!(K, f, dbc)
u = Symmetric(K) \ f;
# export
filename = "cook_" * (isa(interpolation_u, Lagrange{2,RefCube,1}) ? "linear" : "quadratic") *
"_linear"
vtk_grid(filename, dh) do vtkfile
vtk_point_data(vtkfile, dh, u)
end
return u,dh
end
linear = Lagrange{2,RefCube,1}()
quadratic = Lagrange{2,RefCube,2}()
ν = 0.4999999
Emod = 1.
Gmod = Emod / 2(1 + ν)
Kmod = Emod * ν / ((1+ν) * (1-2ν))
mp = LinearElasticity(Gmod, Kmod)
u_linear,dh_linear = solve(linear, linear, mp);
u_quadratic,dh_quadratic = solve(quadratic, linear, mp);
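A hypothetical follow-up (not in the original file): with ν close to 0.5 the two discretizations can be compared by their largest displacement magnitude, sketched here via the overall maximum rather than a specific tip node:

```julia
# Compare the two solves above (assumes `u_linear` and `u_quadratic` exist):
println("max |u|, linear u / linear p:    ", maximum(abs, u_linear))
println("max |u|, quadratic u / linear p: ", maximum(abs, u_quadratic))
```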
using FerriteGmsh
using Ferrite
gmsh.initialize()
gmsh.option.setNumber("General.Terminal", 1)
gmsh.model.add("demo")
lc = 0.2
gmsh.model.geo.addPoint(-0.5, -1, 0, lc, 1)
gmsh.model.geo.addPoint(0.5, -1, 0, lc, 2)
gmsh.model.geo.addPoint(-0.5, 0, 0, lc, 3)
gmsh.model.geo.addPoint(0.5, 0, 0, lc, 4)
gmsh.model.geo.addPoint(-0.5, 1, 0, lc, 5)
gmsh.model.geo.addPoint(0.5, 1, 0, lc, 6)
gmsh.model.geo.addLine(1, 2, 1)
gmsh.model.geo.addLine(2, 4, 2)
gmsh.model.geo.addLine(4, 3, 3)
gmsh.model.geo.addLine(1, 3, 4)
gmsh.model.geo.addLine(3, 5, 5)
gmsh.model.geo.addLine(5, 6, 6)
gmsh.model.geo.addLine(4, 6, 7)
gmsh.model.geo.addCurveLoop([1, 2, 3, -4], 1)
gmsh.model.geo.addCurveLoop([-3, 7, -6, -5], 2)
gmsh.model.geo.addPlaneSurface([1], 1)
gmsh.model.geo.addPlaneSurface([2], 2)
gmsh.model.geo.mesh.setTransfiniteCurve(1, 3)
gmsh.model.geo.mesh.setTransfiniteCurve(2, 3)
gmsh.model.geo.mesh.setTransfiniteCurve(3, 3)
gmsh.model.geo.mesh.setTransfiniteCurve(4, 3)
gmsh.model.geo.mesh.setTransfiniteCurve(5, 3)
gmsh.model.geo.mesh.setTransfiniteCurve(6, 3)
gmsh.model.geo.mesh.setTransfiniteCurve(7, 3)
gmsh.model.geo.mesh.setTransfiniteSurface(1)
gmsh.model.geo.mesh.setRecombine(2, 1)
gmsh.model.addPhysicalGroup(2, [1], 1)
gmsh.model.setPhysicalName(2, 1, "quad")
gmsh.model.addPhysicalGroup(2, [2], 2)
gmsh.model.setPhysicalName(2, 2, "triangle")
gmsh.model.addPhysicalGroup(1, [6], 3)
gmsh.model.setPhysicalName(1, 3, "top")
gmsh.model.addPhysicalGroup(1, [1], 4)
gmsh.model.setPhysicalName(1, 4, "bottom")
gmsh.model.geo.synchronize()
gmsh.model.mesh.generate(2)
nodes = tonodes()
elements, gmsh_eleidx = toelements(2)
boundarydict = toboundary(1)
facesets = tofacesets(boundarydict,elements)
cellsets = tocellsets(2,gmsh_eleidx)
grid = Grid(elements,nodes,facesets=facesets,cellsets=cellsets)
dh = MixedDofHandler(grid)
push!(dh,FieldHandler([Field(:p,Lagrange{2,RefTetrahedron,1}(),1),Field(:u,Lagrange{2,RefTetrahedron,1}(),2)], getcellset(grid,"triangle")))
push!(dh,FieldHandler([Field(:u,Lagrange{2,RefCube,1}(),2)], getcellset(grid,"quad")))
close!(dh)
u = zeros(ndofs(dh))
for cell in CellIterator(dh,collect(dh.fieldhandlers[1].cellset))
celldofs_ = celldofs(cell)
u[celldofs_] .= 1
end
for cell in CellIterator(dh,collect(dh.fieldhandlers[2].cellset))
celldofs_ = celldofs(cell)
dof_range_ = Ferrite.dof_range(dh.fieldhandlers[2],:u)
u[celldofs_[dof_range_]] .= 0.5
end
using Ferrite, SparseArrays, LinearAlgebra, FerriteViz
struct J2Plasticity{T, S <: SymmetricTensor{4, 3, T}}
G::T # Shear modulus
K::T # Bulk modulus
σ₀::T # Initial yield limit
H::T # Hardening modulus
Dᵉ::S # Elastic stiffness tensor
end;
function J2Plasticity(E, ν, σ₀, H)
δ(i,j) = i == j ? 1.0 : 0.0 # helper function
G = E / 2(1 + ν)
K = E / 3(1 - 2ν)
Isymdev(i,j,k,l) = 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) - 1.0/3.0*δ(i,j)*δ(k,l)
temp(i,j,k,l) = 2.0G *( 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) + ν/(1.0-2.0ν)*δ(i,j)*δ(k,l))
Dᵉ = SymmetricTensor{4, 3}(temp)
return J2Plasticity(G, K, σ₀, H, Dᵉ)
end;
mutable struct MaterialState{T, S <: SecondOrderTensor{3, T}}
# Store "converged" values
ϵᵖ::S # plastic strain
σ::S # stress
k::T # hardening variable
# Store temporary values used during equilibrium iterations
temp_ϵᵖ::S
temp_σ::S
temp_k::T
end
function MaterialState()
return MaterialState(
zero(SymmetricTensor{2, 3}),
zero(SymmetricTensor{2, 3}),
0.0,
zero(SymmetricTensor{2, 3}),
zero(SymmetricTensor{2, 3}),
0.0)
end
function update_state!(state::MaterialState)
state.ϵᵖ = state.temp_ϵᵖ
state.σ = state.temp_σ
state.k = state.temp_k
end;
function vonMises(σ)
s = dev(σ)
return sqrt(3.0/2.0 * s ⊡ s)
end;
function compute_stress_tangent(ϵ::SymmetricTensor{2, 3}, material::J2Plasticity, state::MaterialState)
# unpack some material parameters
G = material.G
K = material.K
H = material.H
# We use (•)ᵗ to denote *trial*-values
σᵗ = material.Dᵉ ⊡ (ϵ - state.ϵᵖ) # trial-stress
sᵗ = dev(σᵗ) # deviatoric part of trial-stress
J₂ = 0.5 * sᵗ ⊡ sᵗ # second invariant of sᵗ
σᵗₑ = sqrt(3.0*J₂) # effective trial-stress (von Mises stress)
σʸ = material.σ₀ + H * state.k # Previous yield limit
φᵗ = σᵗₑ - σʸ # Trial-value of the yield surface
if φᵗ < 0.0 # elastic loading
state.temp_σ = σᵗ
return state.temp_σ, material.Dᵉ
else # plastic loading
h = H + 3G
μ = φᵗ / h # plastic multiplier
c1 = 1 - 3G * μ / σᵗₑ
s = c1 * sᵗ # updated deviatoric stress
σ = s + vol(σᵗ) # updated stress
# Compute algorithmic tangent stiffness ``D = \frac{\Delta \sigma }{\Delta \epsilon}``
κ = H * (state.k + μ) # drag stress
σₑ = material.σ₀ + κ # updated yield surface
δ(i,j) = i == j ? 1.0 : 0.0
Isymdev(i,j,k,l) = 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) - 1.0/3.0*δ(i,j)*δ(k,l)
Q(i,j,k,l) = Isymdev(i,j,k,l) - 3.0 / (2.0*σₑ^2) * s[i,j]*s[k,l]
b = (3G*μ/σₑ) / (1.0 + 3G*μ/σₑ)
Dtemp(i,j,k,l) = -2G*b * Q(i,j,k,l) - 9G^2 / (h*σₑ^2) * s[i,j]*s[k,l]
D = material.Dᵉ + SymmetricTensor{4, 3}(Dtemp)
# Store outputs in the material state
Δϵᵖ = 3/2 *μ / σₑ*s # plastic strain
state.temp_ϵᵖ = state.ϵᵖ + Δϵᵖ # plastic strain
state.temp_k = state.k + μ # hardening variable
state.temp_σ = σ # updated stress
return state.temp_σ, D
end
end
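A scalar sketch (hypothetical illustration, independent of Tensors.jl) of why the plastic multiplier `μ = φᵗ / (H + 3G)` computed above closes the radial-return mapping: the returned effective stress `σᵗₑ - 3G·μ` lands exactly on the updated yield surface `σ₀ + H·(k + μ)`, i.e. the consistency condition is satisfied:

```julia
# Consistency of the radial return, in scalar form:
G, H, σ0, k = 80.0e9, 10.0e9, 200.0e6, 0.0
σte = 260.0e6              # effective trial stress, beyond yield
φt  = σte - (σ0 + H*k)     # trial yield function value (> 0 ⇒ plastic step)
μ   = φt / (H + 3G)        # plastic multiplier, as in compute_stress_tangent
@assert isapprox(σte - 3G*μ, σ0 + H*(k + μ); rtol=1e-12)
```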
function create_values(interpolation)
# setup quadrature rules
qr = QuadratureRule{3,RefTetrahedron}(2)
face_qr = QuadratureRule{2,RefTetrahedron}(3)
# create geometric interpolation (use the same as for u)
interpolation_geom = Lagrange{3,RefTetrahedron,1}()
# cell and facevalues for u
cellvalues_u = CellVectorValues(qr, interpolation, interpolation_geom)
facevalues_u = FaceVectorValues(face_qr, interpolation, interpolation_geom)
return cellvalues_u, facevalues_u
end;
function create_dofhandler(grid, interpolation)
dh = DofHandler(grid)
dim = 3
push!(dh, :u, dim, interpolation) # add a displacement field with 3 components
close!(dh)
return dh
end
function create_bc(dh, grid)
dbcs = ConstraintHandler(dh)
# Clamped on the left side
dofs = [1, 2, 3]
dbc = Dirichlet(:u, getfaceset(grid, "left"), (x,t) -> [0.0, 0.0, 0.0], dofs)
add!(dbcs, dbc)
close!(dbcs)
return dbcs
end;
function doassemble(cellvalues::CellVectorValues{dim},
facevalues::FaceVectorValues{dim}, K::SparseMatrixCSC, grid::Grid,
dh::DofHandler, material::J2Plasticity, u, states, t) where {dim}
r = zeros(ndofs(dh))
assembler = start_assemble(K, r)
nu = getnbasefunctions(cellvalues)
re = zeros(nu) # element residual vector
ke = zeros(nu, nu) # element tangent matrix
for (cell, state) in zip(CellIterator(dh), states)
fill!(ke, 0)
fill!(re, 0)
eldofs = celldofs(cell)
ue = u[eldofs]
assemble_cell!(ke, re, cell, cellvalues, facevalues, grid, material,
ue, state, t)
assemble!(assembler, eldofs, re, ke)
end
return K, r
end
function assemble_cell!(Ke, re, cell, cellvalues, facevalues, grid, material,
ue, state, t)
n_basefuncs = getnbasefunctions(cellvalues)
reinit!(cellvalues, cell)
for q_point in 1:getnquadpoints(cellvalues)
# For each integration point, compute stress and material stiffness
∇u = function_gradient(cellvalues, q_point, ue)
ϵ = symmetric(∇u) # Total strain
σ, D = compute_stress_tangent(ϵ, material, state[q_point])
dΩ = getdetJdV(cellvalues, q_point)
for i in 1:n_basefuncs
δϵ = symmetric(shape_gradient(cellvalues, q_point, i))
re[i] += (δϵ ⊡ σ) * dΩ # add internal force to residual
for j in 1:i
Δϵ = symmetric(shape_gradient(cellvalues, q_point, j))
Ke[i, j] += δϵ ⊡ D ⊡ Δϵ * dΩ
end
end
end
symmetrize_lower!(Ke)
# Add traction as a negative contribution to the element residual `re`:
for face in 1:nfaces(cell)
if onboundary(cell, face) && (cellid(cell), face) ∈ getfaceset(grid, "right")
reinit!(facevalues, cell, face)
for q_point in 1:getnquadpoints(facevalues)
dΓ = getdetJdV(facevalues, q_point)
for i in 1:n_basefuncs
δu = shape_value(facevalues, q_point, i)
re[i] -= (δu ⋅ t) * dΓ
end
end
end
end
end
function symmetrize_lower!(K)
for i in 1:size(K,1)
for j in i+1:size(K,1)
K[i,j] = K[j,i]
end
end
end;
function solve()
# Define material parameters
E = 200.0e9 # [Pa]
H = E/20 # [Pa]
ν = 0.3 # [-]
σ₀ = 200e6 # [Pa]
material = J2Plasticity(E, ν, σ₀, H)
L = 10.0 # beam length [m]
w = 1.0 # beam width [m]
h = 1.0 # beam height [m]
n_timesteps = 100
u_max = zeros(n_timesteps)
traction_magnitude = 1.e7 * range(0.5, 1.0, length=n_timesteps)
# Create geometry, dofs and boundary conditions
n = 2
nels = (10n, n, 2n) # number of elements in each spatial direction
P1 = Vec((0.0, 0.0, 0.0)) # start point for geometry
P2 = Vec((L, w, h)) # end point for geometry
grid = generate_grid(Tetrahedron, nels, P1, P2)
interpolation = Lagrange{3, RefTetrahedron, 1}() # Linear tet with 3 unknowns/node
dh = create_dofhandler(grid, interpolation) # Ferrite helper function
dbcs = create_bc(dh, grid) # create Dirichlet boundary-conditions
cellvalues, facevalues = create_values(interpolation)
# Pre-allocate solution vectors, etc.
n_dofs = ndofs(dh) # total number of dofs
u = zeros(n_dofs) # solution vector
plotter = MakiePlotter(dh,u)
fig = ferriteviewer(plotter)
display(fig)
Δu = zeros(n_dofs) # displacement correction
r = zeros(n_dofs) # residual
K = create_sparsity_pattern(dh); # tangent stiffness matrix
# Create material states. One array for each cell, where each element is an array of material-
# states - one for each integration point
nqp = getnquadpoints(cellvalues)
states = [[MaterialState() for _ in 1:nqp] for _ in 1:getncells(grid)]
# states = [MaterialState() for _ in 1:nqp for _ in 1:getncells(grid)]
# temp_states = [MaterialState() for _ in 1:nqp for _ in 1:getncells(grid)]
# Newton-Raphson loop
NEWTON_TOL = 1 # 1 N
for timestep in 1:n_timesteps
t = timestep # actual time (used for evaluating d-bndc)
traction = Vec((0.0, 0.0, traction_magnitude[timestep]))
newton_itr = -1
update!(dbcs, t) # evaluates the D-bndc at time t
apply!(u, dbcs) # set the prescribed values in the solution vector
while true; newton_itr += 1
if newton_itr > 8
error("Reached maximum Newton iterations, aborting")
end
K, r = doassemble(cellvalues, facevalues, K, grid, dh, material, u,
states, traction);
norm_r = norm(r[Ferrite.free_dofs(dbcs)])
if norm_r < NEWTON_TOL
break
end
apply_zero!(K, r, dbcs)
Δu = Symmetric(K) \ r
u -= Δu
end
FerriteViz.update!(plotter,u)
sleep(0.1)
# Update all the material states after we have reached equilibrium
for cell_states in states
foreach(update_state!, cell_states)
end
u_max[timestep] = maximum(abs, u) # maximum displacement in current timestep
end
# ## Postprocessing
# Only a vtu-file corresponding to the last time-step is exported.
#
# The following is a quick (and dirty) way of extracting average cell data for export.
mises_values = zeros(getncells(grid))
κ_values = zeros(getncells(grid))
for (el, cell_states) in enumerate(states)
for state in cell_states
mises_values[el] += vonMises(state.σ)
κ_values[el] += state.k*material.H
end
mises_values[el] /= length(cell_states) # average von Mises stress
κ_values[el] /= length(cell_states) # average drag stress
end
return u, dh, traction_magnitude
end
u, dh, traction_magnitude = solve();
# This file was generated using Literate.jl, https://github.com/fredrikekre/Literate.jl
using Ferrite, SparseArrays, LinearAlgebra, FerriteViz
struct J2Plasticity{T, S <: SymmetricTensor{4, 3, T}}
G::T # Shear modulus
K::T # Bulk modulus
σ₀::T # Initial yield limit
H::T # Hardening modulus
Dᵉ::S # Elastic stiffness tensor
end;
function J2Plasticity(E, ν, σ₀, H)
δ(i,j) = i == j ? 1.0 : 0.0 # helper function
G = E / 2(1 + ν)
K = E / 3(1 - 2ν)
Isymdev(i,j,k,l) = 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) - 1.0/3.0*δ(i,j)*δ(k,l)
temp(i,j,k,l) = 2.0G *( 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) + ν/(1.0-2.0ν)*δ(i,j)*δ(k,l))
Dᵉ = SymmetricTensor{4, 3}(temp)
return J2Plasticity(G, K, σ₀, H, Dᵉ)
end;
mutable struct MaterialState{T, S <: SecondOrderTensor{3, T}}
# Store "converged" values
ϵᵖ::S # plastic strain
σ::S # stress
k::T # hardening variable
# Store temporary values used during equilibrium iterations
temp_ϵᵖ::S
temp_σ::S
temp_k::T
end
function MaterialState()
return MaterialState(
zero(SymmetricTensor{2, 3}),
zero(SymmetricTensor{2, 3}),
0.0,
zero(SymmetricTensor{2, 3}),
zero(SymmetricTensor{2, 3}),
0.0)
end
function update_state!(state::MaterialState)
state.ϵᵖ = state.temp_ϵᵖ
state.σ = state.temp_σ
state.k = state.temp_k
end;
function vonMises(σ)
s = dev(σ)
return sqrt(3.0/2.0 * s ⊡ s)
end;
function compute_stress_tangent(ϵ::SymmetricTensor{2, 3}, material::J2Plasticity, state::MaterialState)
# unpack some material parameters
G = material.G
K = material.K
H = material.H
# We use (•)ᵗ to denote *trial*-values
σᵗ = material.Dᵉ ⊡ (ϵ - state.ϵᵖ) # trial-stress
sᵗ = dev(σᵗ) # deviatoric part of trial-stress
J₂ = 0.5 * sᵗ ⊡ sᵗ # second invariant of sᵗ
σᵗₑ = sqrt(3.0*J₂) # effective trial-stress (von Mises stress)
σʸ = material.σ₀ + H * state.k # Previous yield limit
φᵗ = σᵗₑ - σʸ # Trial-value of the yield surface
if φᵗ < 0.0 # elastic loading
state.temp_σ = σᵗ
return state.temp_σ, material.Dᵉ
else # plastic loading
h = H + 3G
μ = φᵗ / h # plastic multiplier
c1 = 1 - 3G * μ / σᵗₑ
s = c1 * sᵗ # updated deviatoric stress
σ = s + vol(σᵗ) # updated stress
# Compute algorithmic tangent stiffness ``D = \frac{\Delta \sigma }{\Delta \epsilon}``
κ = H * (state.k + μ) # drag stress
σₑ = material.σ₀ + κ # updated yield surface
δ(i,j) = i == j ? 1.0 : 0.0
Isymdev(i,j,k,l) = 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) - 1.0/3.0*δ(i,j)*δ(k,l)
Q(i,j,k,l) = Isymdev(i,j,k,l) - 3.0 / (2.0*σₑ^2) * s[i,j]*s[k,l]
b = (3G*μ/σₑ) / (1.0 + 3G*μ/σₑ)
Dtemp(i,j,k,l) = -2G*b * Q(i,j,k,l) - 9G^2 / (h*σₑ^2) * s[i,j]*s[k,l]
D = material.Dᵉ + SymmetricTensor{4, 3}(Dtemp)
# Store outputs in the material state
Δϵᵖ = 3/2 *μ / σₑ*s # plastic strain
state.temp_ϵᵖ = state.ϵᵖ + Δϵᵖ # plastic strain
state.temp_k = state.k + μ # hardening variable
state.temp_σ = σ # updated stress
return state.temp_σ, D
end
end
function create_values(interpolation)
# setup quadrature rules
qr = QuadratureRule{3,RefTetrahedron}(2)
face_qr = QuadratureRule{2,RefTetrahedron}(3)
# create geometric interpolation (use the same as for u)
interpolation_geom = Lagrange{3,RefTetrahedron,1}()
# cell and facevalues for u
cellvalues_u = CellVectorValues(qr, interpolation, interpolation_geom)
facevalues_u = FaceVectorValues(face_qr, interpolation, interpolation_geom)
return cellvalues_u, facevalues_u
end;
function create_dofhandler(grid, interpolation)
dh = DofHandler(grid)
dim = 3
push!(dh, :u, dim, interpolation) # add a displacement field with 3 components
close!(dh)
return dh
end
function create_bc(dh, grid)
dbcs = ConstraintHandler(dh)
# Clamped on the left side
dofs = [1, 2, 3]
dbc = Dirichlet(:u, getfaceset(grid, "left"), (x,t) -> [0.0, 0.0, 0.0], dofs)
add!(dbcs, dbc)
close!(dbcs)
return dbcs
end;
function doassemble(cellvalues::CellVectorValues{dim},
facevalues::FaceVectorValues{dim}, K::SparseMatrixCSC, grid::Grid,
dh::DofHandler, material::J2Plasticity, u, states, t) where {dim}
r = zeros(ndofs(dh))
assembler = start_assemble(K, r)
nu = getnbasefunctions(cellvalues)
re = zeros(nu) # element residual vector
ke = zeros(nu, nu) # element tangent matrix
for (cell, state) in zip(CellIterator(dh), states)
fill!(ke, 0)
fill!(re, 0)
eldofs = celldofs(cell)
ue = u[eldofs]
assemble_cell!(ke, re, cell, cellvalues, facevalues, grid, material,
ue, state, t)
assemble!(assembler, eldofs, re, ke)
end
return K, r
end
function assemble_cell!(Ke, re, cell, cellvalues, facevalues, grid, material,
ue, state, t)
n_basefuncs = getnbasefunctions(cellvalues)
reinit!(cellvalues, cell)
for q_point in 1:getnquadpoints(cellvalues)
# For each integration point, compute stress and material stiffness
∇u = function_gradient(cellvalues, q_point, ue)
ϵ = symmetric(∇u) # Total strain
σ, D = compute_stress_tangent(ϵ, material, state[q_point])
dΩ = getdetJdV(cellvalues, q_point)
for i in 1:n_basefuncs
δϵ = symmetric(shape_gradient(cellvalues, q_point, i))
re[i] += (δϵ ⊡ σ) * dΩ # add internal force to residual
for j in 1:i
Δϵ = symmetric(shape_gradient(cellvalues, q_point, j))
Ke[i, j] += δϵ ⊡ D ⊡ Δϵ * dΩ
end
end
end
symmetrize_lower!(Ke)
# Add traction as a negative contribution to the element residual `re`:
for face in 1:nfaces(cell)
if onboundary(cell, face) && (cellid(cell), face) ∈ getfaceset(grid, "right")
reinit!(facevalues, cell, face)
for q_point in 1:getnquadpoints(facevalues)
dΓ = getdetJdV(facevalues, q_point)
for i in 1:n_basefuncs
δu = shape_value(facevalues, q_point, i)
re[i] -= (δu ⋅ t) * dΓ
end
end
end
end
end
function symmetrize_lower!(K)
for i in 1:size(K,1)
for j in i+1:size(K,1)
K[i,j] = K[j,i]
end
end
end;
function solve()
# Define material parameters
E = 200.0e9 # [Pa]
H = E/20 # [Pa]
ν = 0.3 # [-]
σ₀ = 200e6 # [Pa]
material = J2Plasticity(E, ν, σ₀, H)
L = 10.0 # beam length [m]
w = 1.0 # beam width [m]
h = 1.0 # beam height [m]
n_timesteps = 20
u_max = zeros(n_timesteps)
traction_magnitude = 1.e7 * range(0.5, 1.0, length=n_timesteps)
# Create geometry, dofs and boundary conditions
n = 2
nels = (10n, n, 2n) # number of elements in each spatial direction
P1 = Vec((0.0, 0.0, 0.0)) # start point for geometry
P2 = Vec((L, w, h)) # end point for geometry
grid = generate_grid(Tetrahedron, nels, P1, P2)
interpolation = Lagrange{3, RefTetrahedron, 1}() # Linear tet with 3 unknowns/node
dh = create_dofhandler(grid, interpolation) # Ferrite helper function
dbcs = create_bc(dh, grid) # create Dirichlet boundary-conditions
cellvalues, facevalues = create_values(interpolation)
# Pre-allocate solution vectors, etc.
n_dofs = ndofs(dh) # total number of dofs
u = zeros(n_dofs) # solution vector
u_history = Vector{Vector{Float64}}()
Δu = zeros(n_dofs) # displacement correction
r = zeros(n_dofs) # residual
K = create_sparsity_pattern(dh); # tangent stiffness matrix
# Create material states. One array for each cell, where each element is an array of material-
# states - one for each integration point
nqp = getnquadpoints(cellvalues)
states = [[MaterialState() for _ in 1:nqp] for _ in 1:getncells(grid)]
# states = [MaterialState() for _ in 1:nqp for _ in 1:getncells(grid)]
# temp_states = [MaterialState() for _ in 1:nqp for _ in 1:getncells(grid)]
# Newton-Raphson loop
NEWTON_TOL = 1 # 1 N
for timestep in 1:n_timesteps
t = timestep # actual time (used for evaluating d-bndc)
traction = Vec((0.0, 0.0, traction_magnitude[timestep]))
newton_itr = -1
update!(dbcs, t) # evaluates the D-bndc at time t
apply!(u, dbcs) # set the prescribed values in the solution vector
while true; newton_itr += 1
if newton_itr > 8
error("Reached maximum Newton iterations, aborting")
end
K, r = doassemble(cellvalues, facevalues, K, grid, dh, material, u,
states, traction);
norm_r = norm(r[Ferrite.free_dofs(dbcs)])
if norm_r < NEWTON_TOL
break
end
apply_zero!(K, r, dbcs)
Δu = Symmetric(K) \ r
u -= Δu
end
push!(u_history,u)
# Update all the material states after we have reached equilibrium
for cell_states in states
foreach(update_state!, cell_states)
end
u_max[timestep] = maximum(abs, u) # maximum displacement in current timestep
end
# ## Postprocessing
# Only a vtu-file corresponding to the last time-step is exported.
#
# The following is a quick (and dirty) way of extracting average cell data for export.
mises_values = zeros(getncells(grid))
κ_values = zeros(getncells(grid))
for (el, cell_states) in enumerate(states)
for state in cell_states
mises_values[el] += vonMises(state.σ)
κ_values[el] += state.k*material.H
end
mises_values[el] /= length(cell_states) # average von Mises stress
κ_values[el] /= length(cell_states) # average drag stress
end
return u, dh, u_history
end
u, dh, u_history = solve();
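Since `solve` above stores the full displacement history, it can be replayed afterwards. This is a sketch only; it assumes the FerriteViz API (`MakiePlotter`, `ferriteviewer`, `FerriteViz.update!`) used in the live-plotting variant of this example, plus an interactive Makie backend:

```julia
# Replay the stored displacement history frame by frame:
plotter = MakiePlotter(dh, u_history[1])
fig = ferriteviewer(plotter)
display(fig)
for u_t in u_history
    FerriteViz.update!(plotter, u_t)  # push the next solution into the plot
    sleep(0.1)
end
```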
using Ferrite, SparseArrays, LinearAlgebra, FerriteViz
struct J2Plasticity{T, S <: SymmetricTensor{4, 3, T}}
G::T # Shear modulus
K::T # Bulk modulus
σ₀::T # Initial yield limit
H::T # Hardening modulus
Dᵉ::S # Elastic stiffness tensor
end;
function J2Plasticity(E, ν, σ₀, H)
δ(i,j) = i == j ? 1.0 : 0.0 # helper function
G = E / 2(1 + ν)
K = E / 3(1 - 2ν)
Isymdev(i,j,k,l) = 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) - 1.0/3.0*δ(i,j)*δ(k,l)
temp(i,j,k,l) = 2.0G *( 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) + ν/(1.0-2.0ν)*δ(i,j)*δ(k,l))
Dᵉ = SymmetricTensor{4, 3}(temp)
return J2Plasticity(G, K, σ₀, H, Dᵉ)
end;
mutable struct MaterialState{T, S <: SecondOrderTensor{3, T}}
# Store "converged" values
ϵᵖ::S # plastic strain
σ::S # stress
k::T # hardening variable
# Store temporary values used during equilibrium iterations
temp_ϵᵖ::S
temp_σ::S
temp_k::T
end
function MaterialState()
return MaterialState(
zero(SymmetricTensor{2, 3}),
zero(SymmetricTensor{2, 3}),
0.0,
zero(SymmetricTensor{2, 3}),
zero(SymmetricTensor{2, 3}),
0.0)
end
function update_state!(state::MaterialState)
state.ϵᵖ = state.temp_ϵᵖ
state.σ = state.temp_σ
state.k = state.temp_k
end;
function vonMises(σ)
s = dev(σ)
return sqrt(3.0/2.0 * s ⊡ s)
end;
function compute_stress_tangent(ϵ::SymmetricTensor{2, 3}, material::J2Plasticity, state::MaterialState)
# unpack some material parameters
G = material.G
K = material.K
H = material.H
# We use (•)ᵗ to denote *trial*-values
σᵗ = material.Dᵉ ⊡ (ϵ - state.ϵᵖ) # trial-stress
sᵗ = dev(σᵗ) # deviatoric part of trial-stress
J₂ = 0.5 * sᵗ ⊡ sᵗ # second invariant of sᵗ
σᵗₑ = sqrt(3.0*J₂) # effective trial-stress (von Mises stress)
σʸ = material.σ₀ + H * state.k # Previous yield limit
φᵗ = σᵗₑ - σʸ # Trial-value of the yield surface
if φᵗ < 0.0 # elastic loading
state.temp_σ = σᵗ
return state.temp_σ, material.Dᵉ
else # plastic loading
h = H + 3G
μ = φᵗ / h # plastic multiplier
c1 = 1 - 3G * μ / σᵗₑ
s = c1 * sᵗ # updated deviatoric stress
σ = s + vol(σᵗ) # updated stress
# Compute algorithmic tangent stiffness ``D = \frac{\Delta \sigma }{\Delta \epsilon}``
κ = H * (state.k + μ) # drag stress
σₑ = material.σ₀ + κ # updated yield surface
δ(i,j) = i == j ? 1.0 : 0.0
Isymdev(i,j,k,l) = 0.5*(δ(i,k)*δ(j,l) + δ(i,l)*δ(j,k)) - 1.0/3.0*δ(i,j)*δ(k,l)
Q(i,j,k,l) = Isymdev(i,j,k,l) - 3.0 / (2.0*σₑ^2) * s[i,j]*s[k,l]
b = (3G*μ/σₑ) / (1.0 + 3G*μ/σₑ)
Dtemp(i,j,k,l) = -2G*b * Q(i,j,k,l) - 9G^2 / (h*σₑ^2) * s[i,j]*s[k,l]
D = material.Dᵉ + SymmetricTensor{4, 3}(Dtemp)
# Store outputs in the material state
Δϵᵖ = 3/2 *μ / σₑ*s # plastic strain
state.temp_ϵᵖ = state.ϵᵖ + Δϵᵖ # plastic strain
state.temp_k = state.k + μ # hardening variable
state.temp_σ = σ # updated stress
return state.temp_σ, D
end
end
function create_values(interpolation)
# setup quadrature rules
qr = QuadratureRule{3,RefTetrahedron}(2)
face_qr = QuadratureRule{2,RefTetrahedron}(3)
# create geometric interpolation (use the same as for u)
interpolation_geom = Lagrange{3,RefTetrahedron,1}()
# cell and facevalues for u
cellvalues_u = CellVectorValues(qr, interpolation, interpolation_geom)
facevalues_u = FaceVectorValues(face_qr, interpolation, interpolation_geom)
return cellvalues_u, facevalues_u
end;
function create_dofhandler(grid, interpolation)
dh = DofHandler(grid)
dim = 3
push!(dh, :u, dim, interpolation) # add a displacement field with 3 components
close!(dh)
return dh
end
function create_bc(dh, grid)
dbcs = ConstraintHandler(dh)
# Clamped on the left side
dofs = [1, 2, 3]
dbc = Dirichlet(:u, getfaceset(grid, "left"), (x,t) -> [0.0, 0.0, 0.0], dofs)
add!(dbcs, dbc)
close!(dbcs)
return dbcs
end;
function doassemble(cellvalues::CellVectorValues{dim},
facevalues::FaceVectorValues{dim}, K::SparseMatrixCSC, grid::Grid,
dh::DofHandler, material::J2Plasticity, u, states, t) where {dim}
r = zeros(ndofs(dh))
assembler = start_assemble(K, r)
nu = getnbasefunctions(cellvalues)
re = zeros(nu) # element residual vector
ke = zeros(nu, nu) # element tangent matrix
for (cell, state) in zip(CellIterator(dh), states)
fill!(ke, 0)
fill!(re, 0)
eldofs = celldofs(cell)
ue = u[eldofs]
assemble_cell!(ke, re, cell, cellvalues, facevalues, grid, material,
ue, state, t)
assemble!(assembler, eldofs, re, ke)
end
return K, r
end
function assemble_cell!(Ke, re, cell, cellvalues, facevalues, grid, material,
ue, state, t)
n_basefuncs = getnbasefunctions(cellvalues)
reinit!(cellvalues, cell)
for q_point in 1:getnquadpoints(cellvalues)
# For each integration point, compute stress and material stiffness
∇u = function_gradient(cellvalues, q_point, ue)
ϵ = symmetric(∇u) # Total strain
σ, D = compute_stress_tangent(ϵ, material, state[q_point])
dΩ = getdetJdV(cellvalues, q_point)
for i in 1:n_basefuncs
δϵ = symmetric(shape_gradient(cellvalues, q_point, i))
re[i] += (δϵ ⊡ σ) * dΩ # add internal force to residual
for j in 1:i
Δϵ = symmetric(shape_gradient(cellvalues, q_point, j))
Ke[i, j] += δϵ ⊡ D ⊡ Δϵ * dΩ
end
end
end
symmetrize_lower!(Ke)
# Add traction as a negative contribution to the element residual `re`:
for face in 1:nfaces(cell)
if onboundary(cell, face) && (cellid(cell), face) ∈ getfaceset(grid, "right")
reinit!(facevalues, cell, face)
for q_point in 1:getnquadpoints(facevalues)
dΓ = getdetJdV(facevalues, q_point)
for i in 1:n_basefuncs
δu = shape_value(facevalues, q_point, i)
re[i] -= (δu ⋅ t) * dΓ
end
end
end
end
end
function symmetrize_lower!(K)
for i in 1:size(K,1)
for j in i+1:size(K,1)
K[i,j] = K[j,i]
end
end
end;
function solve(liveplotting=false)
# Define material parameters
E = 200.0e9 # [Pa]
H = E/20 # [Pa]
ν = 0.3 # [-]
σ₀ = 200e6 # [Pa]
material = J2Plasticity(E, ν, σ₀, H)
L = 10.0 # beam length [m]
w = 1.0 # beam width [m]
h = 1.0 # beam height [m]
n_timesteps = 100
u_max = zeros(n_timesteps)
traction_magnitude = 1.e7 * range(0.5, 1.0, length=n_timesteps)
# Create geometry, dofs and boundary conditions
n = 2
nels = (10n, n, 2n) # number of elements in each spatial direction
P1 = Vec((0.0, 0.0, 0.0)) # start point for geometry
P2 = Vec((L, w, h)) # end point for geometry
grid = generate_grid(Tetrahedron, nels, P1, P2)
interpolation = Lagrange{3, RefTetrahedron, 1}() # Linear tet with 3 unknowns/node
dh = create_dofhandler(grid, interpolation) # Ferrite helper function
dbcs = create_bc(dh, grid) # create Dirichlet boundary-conditions
cellvalues, facevalues = create_values(interpolation)
# Pre-allocate solution vectors, etc.
n_dofs = ndofs(dh) # total number of dofs
u = zeros(n_dofs) # solution vector
u_history = Vector{Vector{Float64}}()
if liveplotting
plotter = MakiePlotter(dh,u)
fig = ferriteviewer(plotter)
display(fig)
end
Δu = zeros(n_dofs) # displacement correction
r = zeros(n_dofs) # residual
K = create_sparsity_pattern(dh); # tangent stiffness matrix
# Create material states. One array for each cell, where each element is an array of material-
# states - one for each integration point
nqp = getnquadpoints(cellvalues)
states = [[MaterialState() for _ in 1:nqp] for _ in 1:getncells(grid)]
# states = [MaterialState() for _ in 1:nqp for _ in 1:getncells(grid)]
# temp_states = [MaterialState() for _ in 1:nqp for _ in 1:getncells(grid)]
# Newton-Raphson loop
NEWTON_TOL = 1 # 1 N
for timestep in 1:n_timesteps
t = timestep # actual time (used for evaluating d-bndc)
traction = Vec((0.0, 0.0, traction_magnitude[timestep]))
newton_itr = -1
update!(dbcs, t) # evaluates the D-bndc at time t
apply!(u, dbcs) # set the prescribed values in the solution vector
while true; newton_itr += 1
if newton_itr > 8
error("Reached maximum Newton iterations, aborting")
end
K, r = doassemble(cellvalues, facevalues, K, grid, dh, material, u,
states, traction);
norm_r = norm(r[Ferrite.free_dofs(dbcs)])
if norm_r < NEWTON_TOL
break
end
apply_zero!(K, r, dbcs)
Δu = Symmetric(K) \ r
u -= Δu
end
if liveplotting
FerriteViz.update!(plotter,u)
sleep(0.1)
end
push!(u_history,u)
# Update all the material states after we have reached equilibrium
for cell_states in states
foreach(update_state!, cell_states)
end
u_max[timestep] = maximum(abs, u) # maximum displacement in current timestep
end
# ## Postprocessing
# Only a vtu-file corresponding to the last time-step is exported.
#
# The following is a quick (and dirty) way of extracting average cell data for export.
mises_values = zeros(getncells(grid))
κ_values = zeros(getncells(grid))
for (el, cell_states) in enumerate(states)
for state in cell_states
mises_values[el] += vonMises(state.σ)
κ_values[el] += state.k*material.H
end
mises_values[el] /= length(cell_states) # average von Mises stress
κ_values[el] /= length(cell_states) # average drag stress
end
return u, dh, u_history, mises_values, κ_values
end
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
module FerriteViz
using Makie
using Tensors
import Ferrite
import GeometryBasics
import ShaderAbstractions
import LinearAlgebra
abstract type AbstractPlotter end
include("utils.jl")
include("makieplotting.jl")
include("lor_tools.jl")
export MakiePlotter
export ferriteviewer
export for_discretization
end
# These functions generate the corresponding first order cells of an interpolation.
# Triangle
for_nodes(::Union{Ferrite.Lagrange{2,Ferrite.RefTetrahedron,1},Ferrite.DiscontinuousLagrange{2,Ferrite.RefTetrahedron,1},Ferrite.Triangle}) = (
(3,1,2),
)
# Quadratic Triangle
for_nodes(::Union{Ferrite.Lagrange{2,Ferrite.RefTetrahedron,2},Ferrite.DiscontinuousLagrange{2,Ferrite.RefTetrahedron,2},Ferrite.QuadraticTriangle}) = (
(6,1,4),
(5,6,4),
(3,6,5),
(5,4,2),
)
# Cubic Triangle
for_nodes(::Union{Ferrite.Lagrange{2,Ferrite.RefTetrahedron,3},Ferrite.DiscontinuousLagrange{2,Ferrite.RefTetrahedron,3},Ferrite.Cell{2,10,3}}) = (
(3,8,7),
(7,8,10),
(8,9,10),
(10,9,4),
(9,1,4),
(7,10,6),
(6,10,5),
(6,5,2),
(10,4,5),
)
# Biquadratic Triangle
for_nodes(::Union{Ferrite.Lagrange{2,Ferrite.RefTetrahedron,4},Ferrite.DiscontinuousLagrange{2,Ferrite.RefTetrahedron,4},Ferrite.Cell{2,15,3}}) = (
(3,10,9),
(13,9,10),
(10,11,13),
(14,13,11),
(11,12,14),
(4,14,12),
(12,1,4),
(9,13,8),
(15,8,13),
(13,14,15),
(5,15,14),
(14,4,5),
(8,15,7),
(6,7,15),
(15,5,6),
(7,6,2),
)
# Quintic Triangle
for_nodes(::Union{Ferrite.Lagrange{2,Ferrite.RefTetrahedron,5},Ferrite.DiscontinuousLagrange{2,Ferrite.RefTetrahedron,5},Ferrite.Cell{2,20,3}}) = (
(3,12,11),
(16,11,12),
(12,13,16),
(17,16,13),
(13,14,17),
(18,17,14),
(14,15,18),
(4,18,15),
(15,1,4),
(11,16,10),
(19,10,16),
(16,17,19),
(20,19,17),
(17,18,20),
(5,20,18),
(18,4,5),
(10,19,9),
(21,9,19),
(19,20,21),
(6,21,20),
(20,5,6),
(9,21,8),
(7,8,21),
(21,6,7),
(8,7,2),
)
# Tetrahedron
for_nodes(::Union{Ferrite.Lagrange{3,Ferrite.RefTetrahedron,1},Ferrite.DiscontinuousLagrange{3,Ferrite.RefTetrahedron,1},Ferrite.Tetrahedron}) = (
(1,2,3,4),
)
# Quadratic Tetrahedron
for_nodes(::Union{Ferrite.Lagrange{3,Ferrite.RefTetrahedron,2},Ferrite.DiscontinuousLagrange{3,Ferrite.RefTetrahedron,2},Ferrite.QuadraticTetrahedron}) = (
(5,2,6,9),
(7,6,3,10),
(8,9,10,4),
(8,5,6,9),
(8,6,7,10),
(5,8,1,6),
(7,6,1,8),
(9,10,8,6),
)
# Quadrilateral
for_nodes(::Union{Ferrite.Lagrange{2,Ferrite.RefCube,1},Ferrite.DiscontinuousLagrange{2,Ferrite.RefCube,1},Ferrite.Quadrilateral}) = (
(1,2,3,4),
)
# Quadratic Quadrilateral
for_nodes(::Union{Ferrite.Lagrange{2,Ferrite.RefCube,2},Ferrite.DiscontinuousLagrange{2,Ferrite.RefCube,2},Ferrite.QuadraticQuadrilateral}) = (
(1,5,9,8),
(5,2,6,9),
(9,6,3,7),
(8,9,7,4),
)
# Hexahedron
for_nodes(::Union{Ferrite.Lagrange{3,Ferrite.RefCube,1},Ferrite.DiscontinuousLagrange{3,Ferrite.RefCube,1},Ferrite.Hexahedron}) = (
(1,2,3,4,5,6,7,8),
)
# Quadratic Hexahedron
for_nodes(::Union{Ferrite.Lagrange{3,Ferrite.RefCube,2},Ferrite.DiscontinuousLagrange{3,Ferrite.RefCube,2},Ferrite.Cell{3,27,6}}) = (
(1,9,21,12,17,22,27,25),
(17,22,27,25,5,13,26,16),
(9,2,10,21,22,18,23,27),
(22,18,23,27,13,6,14,26),
(12,21,11,4,25,27,24,20),
(25,27,24,20,16,26,15,8),
(21,10,3,11,27,23,19,24),
(27,23,19,24,26,14,7,15)
)
"""
Get the interpolation of the first order refinement.
"""
for_interpolation(ip::Ferrite.Lagrange{dim,shape,order}) where {dim,shape,order} = Ferrite.Lagrange{dim,shape,1}()
for_base_geometry_type(ip::Ferrite.Lagrange{2,Ferrite.RefCube,order}) where {order} = Ferrite.Quadrilateral
for_base_geometry_type(ip::Ferrite.Lagrange{3,Ferrite.RefCube,order}) where {order} = Ferrite.Hexahedron
for_base_geometry_type(ip::Ferrite.Lagrange{2,Ferrite.RefTetrahedron,order}) where {order} = Ferrite.Triangle
for_base_geometry_type(ip::Ferrite.Lagrange{3,Ferrite.RefTetrahedron,order}) where {order} = Ferrite.Tetrahedron
# TODO move into ferrite core
function Ferrite.field_offset(dh::Ferrite.DofHandler, field_name::Int)
offset = 0
for i in 1:field_name-1
offset += Ferrite.getnbasefunctions(dh.field_interpolations[i])::Int * dh.field_dims[i]
end
return offset
end
# TODO move into ferrite core
function Ferrite.dof_range(dh::Ferrite.DofHandler, field_idx::Int)
offset = Ferrite.field_offset(dh, field_idx)
n_field_dofs = Ferrite.getnbasefunctions(dh.field_interpolations[field_idx])::Int * dh.field_dims[field_idx]
return (offset+1):(offset+n_field_dofs)
end
# TODO move to ferrite core
getfieldname(dh, field_idx) = dh.field_names[field_idx]
"""
Create a first order discretization w.r.t. a field and transfer
the solution.
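A usage sketch (assuming `dh` is a closed `DofHandler` and `u` the matching
solution vector; the names are placeholders):
```julia
dh_lin, u_lin = for_discretization(dh, u)
plotter = MakiePlotter(dh_lin, u_lin)
```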
"""
function for_discretization(dh, u)
field_idx=1
# Some helpers
ip = Ferrite.getfieldinterpolation(dh, field_idx)
field_dim = Ferrite.getfielddim(dh, field_idx)
spatial_dim = Ferrite.getdim(dh.grid)
# TODO Dofs for fields are not continuous. Think harder.
@assert Ferrite.nfields(dh) == 1 "Multiple fields not supported yet"
# # Get dof range, the hard way
# dof_min = dh.ndofs.x
# dof_max = 0
ncells = 0
for cell ∈ Ferrite.CellIterator(dh)
# celldofs = Ferrite.celldofs(cell)
# dof_max = max(dof_max, maximum(celldofs))
# dof_min = min(dof_min, minimum(celldofs))
ncells += length(for_nodes(ip))
end
# Preallocate
nodes = Vector{typeof(dh.grid.nodes[1])}(undef, Ferrite.ndofs(dh)) #(dof_max-dof_min+1)÷field_dim)
cells = Vector{Ferrite.getcelltype(dh.grid)}(undef, ncells)
ref_coords = Ferrite.reference_coordinates(ip)
# Starting here we assume a single type of cell being present
# TODO improve this.
ip_geo = Ferrite.default_interpolation(typeof(dh.grid.cells[1]))
nodes_per_cell = length(ref_coords)
qr = Ferrite.QuadratureRule{spatial_dim, Ferrite.getrefshape(ip)}(zeros(nodes_per_cell), ref_coords)
cv = Ferrite.CellScalarValues(qr, ip, ip_geo)
cellidx = 1
for cell ∈ Ferrite.CellIterator(dh)
Ferrite.reinit!(cv, cell)
coords = Ferrite.getcoordinates(cell)
dofs_f = Ferrite.celldofs(cell)[Ferrite.dof_range(dh, field_idx)]
# Extract coordinates
for q ∈ 1:nodes_per_cell
nodes[dofs_f[q]] = Ferrite.Node(Ferrite.spatial_coordinate(cv, q, coords))
end
# And generate the first order subcells from the new nodes
for subcellnodes ∈ for_nodes(ip)
# Splatting sorcery to extract the global node indices.
cells[cellidx] = for_base_geometry_type(ip)(((dofs_f[[subcellnodes...]]...,)))
cellidx += 1
end
end
# Generate a new dof handler.
grid_new = Ferrite.Grid(cells, nodes)
dh_new = Ferrite.DofHandler(grid_new)
Ferrite.push!(dh_new, getfieldname(dh, field_idx), Ferrite.getfielddim(dh, field_idx), for_interpolation(ip))
Ferrite.close!(dh_new);
# Transfer solution the dumb way.
# TODO this can be optimized.
u_new = zeros(Ferrite.ndofs(dh_new))
for cell_idx ∈ 1:length(dh_new.grid.cells)
dh_dof_range = dh_new.cell_dofs_offset[cell_idx]:(dh_new.cell_dofs_offset[cell_idx+1]-1)
dofs = dh_new.cell_dofs[dh_dof_range][Ferrite.dof_range(dh_new, field_idx)]
u_new[dofs] .= u[[dh_new.grid.cells[cell_idx].nodes...]]
end
return dh_new, u_new
end
"""
FerriteViz.update!(plotter::MakiePlotter, u::Vector)
Updates the Observable `plotter.u` and thereby triggers the plot to update.
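A minimal sketch (`dh`, `u₀` and `u₁` are placeholder names for a dof handler
and two equally sized solution vectors):
```julia
plotter = MakiePlotter(dh, u₀)
solutionplot(plotter)            # the plot observes plotter.u
FerriteViz.update!(plotter, u₁)  # triggers a re-render with the new solution
```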
"""
function update!(plotter::MakiePlotter, u::Vector)
@assert length(plotter.u[]) == length(u)
plotter.u[] .= u
Makie.notify(plotter.u)
end
"""
solutionplot(plotter::MakiePlotter; kwargs...)
solutionplot(dh::AbstractDofHandler, u::Vector; kwargs...)
solutionplot!(plotter::MakiePlotter; kwargs...)
solutionplot!(dh::AbstractDofHandler, u::Vector; kwargs...)
`solutionplot` produces the classical contour plot on the finite element mesh. The most important
keyword arguments are:
- `field::Symbol=:default` representing the field which gets plotted, defaults to the first field in the `dh`.
- `deformation_field::Symbol=:default` field that transforms the mesh by the given deformation, defaults to no deformation
- `process::Function=postprocess` function to construct nodal scalar values from a vector valued problem
- `colormap::Symbol=:cividis`
- `deformation_scale=1.0`
- `shading=false`
- `scale_plot=false`
- `transparent=false`
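A usage sketch (assuming a dof handler `dh` with a vector-valued field `:u` and a
matching solution vector `u`; adjust names to your problem):
```julia
plotter = MakiePlotter(dh, u)
solutionplot(plotter; field=:u, deformation_field=:u, deformation_scale=10.0)
```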
"""
@recipe(SolutionPlot) do scene
Attributes(
scale_plot=false,
shading=false,
field=:default,
deformation_field=:default,
process=postprocess,
colormap=:cividis,
colorrange=(0,1),
transparent=false,
deformation_scale = 1.0,
)
end
function Makie.plot!(SP::SolutionPlot{<:Tuple{<:MakiePlotter}})
plotter = SP[1][]
solution = @lift begin
if $(SP[:field]) == :default
field_name = Ferrite.getfieldnames(plotter.dh)[1]
reshape(transfer_solution(plotter,$(plotter.u); field_name=field_name, process=$(SP[:process])), num_vertices(plotter))
else
reshape(transfer_solution(plotter,$(plotter.u); field_name=$(SP[:field]), process=$(SP[:process])), num_vertices(plotter))
end
end
u_matrix = @lift begin
if $(SP[:deformation_field])===:default
Ferrite.getdim(plotter.dh.grid) > 2 ? Point3f[Point3f(0,0,0)] : Point2f[Point2f(0,0)]
else
#TODO remove convert
convert(Vector{Point{Ferrite.getdim(plotter.dh.grid),Float32}},Makie.to_vertices(transfer_solution(plotter,$(plotter.u); field_name=$(SP[:deformation_field]), process=identity)))
end
end
@lift begin
if $(SP[:deformation_field])===:default
plotter.physical_coords_mesh[1:end] = plotter.physical_coords
else
plotter.physical_coords_mesh[1:end] = plotter.physical_coords .+ ($(SP[:deformation_scale]) .* $(u_matrix))
end
end
mins = @lift(minimum(x->isnan(x) ? 1e10 : x, $solution))
maxs = @lift(maximum(x->isnan(x) ? -1e10 : x, $solution))
SP[:colorrange] = @lift begin
if isapprox($mins,$maxs)
if isapprox($mins,zero($mins)) && isapprox($maxs,zero($maxs))
(1e-18,2e-18)
else
($mins,1.01($maxs))
end
else
($mins,$maxs)
end
end
return Makie.mesh!(SP, plotter.mesh, color=solution, shading=SP[:shading], scale_plot=SP[:scale_plot], colormap=SP[:colormap],colorrange=SP[:colorrange] , transparent=SP[:transparent])
end
"""
cellplot(plotter::MakiePlotter,σ::Vector{T}; kwargs...) where T
cellplot!(plotter::MakiePlotter,σ::Vector{T}; kwargs...) where T
`cellplot` plots constant scalar data on the cells of the finite element mesh. If `T` is not a number, the keyword argument `process`
can be passed in order to reduce the elements of `σ` to a scalar.
Keyword arguments are:
- `deformation_field::Symbol=:default` field that transforms the mesh by the given deformation, defaults to no deformation
- `process::Function=identity` function to construct cell scalar values. Defaults to `identity`, i.e. scalar values.
- `colormap::Symbol=:cividis`
- `deformation_scale=1.0`
- `shading=false`
- `scale_plot=false`
- `transparent=false`
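A usage sketch (`σ_avg` is a placeholder for one scalar value per cell, e.g.
an averaged von Mises stress, and `:u` a placeholder field name):
```julia
cellplot(plotter, σ_avg; deformation_field=:u)
```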
"""
@recipe(CellPlot) do scene
Attributes(
scale_plot=false,
shading=false,
deformation_field=:default,
process=identity,
colormap=:cividis,
colorrange=(0,1),
transparent=false,
deformation_scale = 1.0,
)
end
function Makie.plot!(CP::CellPlot{<:Tuple{<:MakiePlotter{dim},Vector}}) where dim
plotter = CP[1][]
qp_values = CP[2][]
u_matrix = @lift begin
if $(CP[:deformation_field])===:default
Point3f[Point3f(0,0,0)]
else
convert(Vector{Point{Ferrite.getdim(plotter.dh.grid),Float32}},Makie.to_vertices(transfer_solution(plotter,$(plotter.u); field_name=$(CP[:deformation_field]), process=identity)))
end
end
coords = @lift begin
if $(CP[:deformation_field])===:default
plotter.physical_coords_mesh[1:end] = plotter.physical_coords
else
plotter.physical_coords_mesh[1:end] = plotter.physical_coords .+ ($(CP[:deformation_scale]) .* $(u_matrix))
end
end
mins = minimum(qp_values)
maxs = maximum(qp_values)
CP[:colorrange] = @lift(isapprox($mins,$maxs) ? ($mins,1.01($maxs)) : ($mins,$maxs))
solution = @lift(reshape(transfer_scalar_celldata(plotter, qp_values; process=$(CP[:process])), num_vertices(plotter)))
return Makie.mesh!(CP, plotter.mesh, color=solution, shading=CP[:shading], scale_plot=CP[:scale_plot], colormap=CP[:colormap], transparent=CP[:transparent])
end
"""
wireframe(plotter::MakiePlotter; kwargs...)
wireframe(dh::AbstractDofHandler, u::Vector; kwargs...)
wireframe(grid::AbstractGrid; kwargs...)
wireframe!(plotter::MakiePlotter; kwargs...)
wireframe!(dh::AbstractDofHandler, u::Vector; kwargs...)
wireframe!(grid::AbstractGrid; kwargs...)
Plots the finite element mesh, optionally labels it and transforms it if a suitable `deformation_field` is given.
- `plotnodes::Bool=true` plots the nodes as circles/spheres
- `strokewidth::Int=2` how thick faces/edges are drawn
- `color::Symbol=theme(scene,:linecolor)` color of the faces/edges and nodes
- `markersize::Int=30` size of the nodes
- `deformation_field::Symbol=:default` field that transforms the mesh by the given deformation, defaults to no deformation
- `deformation_scale::Number=1.0` scaling of the deformation
- `cellsets=false` Color cells based on their cellset association. If no cellset is found for a cell, the cell is marked blue.
- `nodelabels=false` global node id labels
- `nodelabelcolor=:darkblue`
- `celllabels=false` global cell id labels
- `celllabelcolor=:darkred`
- `textsize::Int=15` size of the label's text
- `visible=true`
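A usage sketch (per the signatures above, this works on a plain grid as well as
on a `MakiePlotter`):
```julia
wireframe(grid; nodelabels=true, celllabels=true, markersize=10)
```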
"""
@recipe(Wireframe) do scene
Attributes(
plotnodes=true,
color=theme(scene, :linecolor),
strokewidth=theme(scene, :linewidth),
markersize=theme(scene, :markersize),
deformation_field=:default,
visible=true,
deformation_scale=1,
textsize=15,
offset=(0.0,0.0),
nodelabels=false,
nodelabelcolor=:darkblue,
celllabels=false,
celllabelcolor=:darkred,
cellsets=false,
depth_shift=-0.0001f0
)
end
function Makie.plot!(WF::Wireframe{<:Tuple{<:MakiePlotter{dim}}}) where dim
plotter = WF[1][]
#triangle representation
# can't use triangle representation, since we don't know by this information which edges to draw
# further it would be in the incompressible example 2600 nodes vs 15000 in triangle representation
# u_matrix = @lift($(WF[:deformation_field])===:default ? zeros(0,3) : transfer_solution(plotter; field_idx=Ferrite.find_field(plotter.dh,$(WF[:deformation_field])), process=identity))
# coords = @lift($(WF[:deformation_field])===:default ? plotter.physical_coords : plotter.physical_coords .+ ($(WF[:scale]) .* $(u_matrix)))
#original representation
pointtype = dim > 2 ? Point3f : Point2f
nodal_u_matrix = @lift begin
if $(WF[:deformation_field])===:default
pointtype[zero(pointtype)]
else
convert(Vector{Point{Ferrite.getdim(plotter.dh.grid),Float32}},Makie.to_vertices(dof_to_node(plotter.dh, $(WF[1][].u); field_name=$(WF[:deformation_field]))))
end
end
gridnodes = @lift begin
if $(WF[:deformation_field])===:default
plotter.gridnodes
else
plotter.gridnodes .+ ($(WF[:deformation_scale]) .* $(nodal_u_matrix))
end
end
lines = @lift begin
dim > 2 ? (lines = Point3f[]) : (lines = Point2f[])
for cell in Ferrite.getcells(plotter.dh.grid)
boundaryentities = dim < 3 ? Ferrite.faces(cell) : Ferrite.edges(cell)
append!(lines, [$gridnodes[e] for boundary in boundaryentities for e in boundary])
end
lines
end
nodes = @lift($(WF[:plotnodes]) ? $(gridnodes) : pointtype[zero(pointtype)])
#plot cellsets
cellsets = plotter.dh.grid.cellsets
cellset_to_value = Dict{String,Int}()
for (cellsetidx,(cellsetname,cellset)) in enumerate(cellsets)
cellset_to_value[cellsetname] = cellsetidx
end
cellset_u = zeros(Ferrite.getncells(plotter.dh.grid))
for (cellidx,cell) in enumerate(Ferrite.getcells(plotter.dh.grid))
for (cellsetname,cellsetvalue) in cellset_to_value
if cellidx in cellsets[cellsetname]
cellset_u[cellidx] = cellsetvalue
end
end
end
u_matrix = @lift begin
if $(WF[:deformation_field])===:default
pointtype[zero(pointtype)]
else
Makie.to_ndim.(pointtype, Makie.to_vertices(transfer_solution(plotter,$(plotter.u); field_name=$(WF[:deformation_field]), process=identity)), 0f0)
end
end
coords = @lift begin
if $(WF[:deformation_field])===:default
plotter.physical_coords_mesh[1:end] = plotter.physical_coords
else
plotter.physical_coords_mesh[1:end] = plotter.physical_coords .+ ($(WF[:deformation_scale]) .* $(u_matrix))
end
end
colorrange = isempty(cellset_to_value) ? (0,1) : (0,maximum(values(cellset_to_value)))
cellset_u = reshape(transfer_scalar_celldata(plotter, cellset_u; process=identity), num_vertices(plotter))
Makie.mesh!(WF, plotter.mesh, color=cellset_u, shading=false, scale_plot=false, colormap=:darktest, visible=WF[:cellsets])
#plot the nodes
shouldplot = @lift ($(WF[:visible]) && $(WF[:plotnodes]))
Makie.scatter!(WF,gridnodes,markersize=WF[:markersize], color=WF[:color], visible=shouldplot)
#set up nodelabels
nodelabels = @lift $(WF[:nodelabels]) ? ["$i" for i in 1:size($gridnodes,1)] : [""]
nodepositions = @lift $(WF[:nodelabels]) ? $gridnodes : (dim < 3 ? Point2f[Point2f((0,0))] : Point3f[Point3f((0,0,0))])
#set up celllabels
celllabels = @lift $(WF[:celllabels]) ? ["$i" for i in 1:Ferrite.getncells(plotter.dh.grid)] : [""]
cellpositions = @lift $(WF[:celllabels]) ? [midpoint(cell,$gridnodes) for cell in Ferrite.getcells(plotter.dh.grid)] : (dim < 3 ? [Point2f((0,0))] : [Point3f((0,0,0))])
Makie.text!(WF,nodepositions, text=nodelabels, textsize=WF[:textsize], offset=WF[:offset],color=WF[:nodelabelcolor])
Makie.text!(WF,celllabels, position=cellpositions, textsize=WF[:textsize], color=WF[:celllabelcolor], align=(:center,:center))
#plot edges (3D) /faces (2D) of the mesh
Makie.linesegments!(WF,lines,color=WF[:color], linewidth=WF[:strokewidth], visible=WF[:visible], depth_shift=WF[:depth_shift])
end
function Makie.plot!(WF::Wireframe{<:Tuple{<:Ferrite.AbstractGrid{dim}}}) where dim
grid = WF[1][]
coords = [Ferrite.getcoordinates(node)[i] for node in Ferrite.getnodes(grid), i in 1:dim]
coords = Makie.to_vertices(coords)
dim > 2 ? (lines = Point3f[]) : (lines = Point2f[])
for cell in Ferrite.getcells(grid)
boundaryentities = dim < 3 ? Ferrite.faces(cell) : Ferrite.edges(cell)
append!(lines, [coords[e] for boundary in boundaryentities for e in boundary])
end
nodes = @lift($(WF[:plotnodes]) ? coords : Point3f[Point3f(0,0,0)])
shouldplot = @lift ($(WF[:visible]) && $(WF[:plotnodes]))
Makie.scatter!(WF,nodes,markersize=WF[:markersize], color=WF[:color], visible=shouldplot)
nodelabels = @lift $(WF[:nodelabels]) ? ["$i" for i in 1:size(coords,1)] : [""]
nodepositions = @lift $(WF[:nodelabels]) ? coords : (dim < 3 ? Point2f[Point2f((0,0))] : Point3f[Point3f((0,0,0))])
celllabels = @lift $(WF[:celllabels]) ? ["$i" for i in 1:Ferrite.getncells(grid)] : [""]
cellpositions = @lift $(WF[:celllabels]) ? [midpoint(cell,coords) for cell in Ferrite.getcells(grid)] : (dim < 3 ? [Point2f((0,0))] : [Point3f((0,0,0))])
#cellsetsplot
if isconcretetype(grid.cells)
dh = Ferrite.DofHandler(grid)
else
dh = Ferrite.MixedDofHandler(grid)
end
cellsets = grid.cellsets
cellset_to_value = Dict{String,Int}()
for (cellsetidx,(cellsetname,cellset)) in enumerate(cellsets)
cellset_to_value[cellsetname] = cellsetidx
end
cellset_u = zeros(Ferrite.getncells(grid))
for (cellidx,cell) in enumerate(Ferrite.getcells(grid))
for (cellsetname,cellsetvalue) in cellset_to_value
if cellidx in cellsets[cellsetname]
cellset_u[cellidx] = cellsetvalue
end
end
end
plotter = MakiePlotter(dh,cellset_u)
cellset_u = reshape(transfer_scalar_celldata(plotter, cellset_u; process=identity), num_vertices(plotter))
colorrange = isempty(cellset_to_value) ? (0,1) : (0,maximum(values(cellset_to_value)))
Makie.mesh!(WF, plotter.mesh, color=cellset_u, shading=false, scale_plot=false, colormap=:darktest, visible=WF[:cellsets])
Makie.text!(WF,nodelabels, position=nodepositions, textsize=WF[:textsize], offset=WF[:offset],color=WF[:nodelabelcolor])
Makie.text!(WF,celllabels, position=cellpositions, textsize=WF[:textsize], color=WF[:celllabelcolor], align=(:center,:center))
Makie.linesegments!(WF,lines,color=WF[:color], strokewidth=WF[:strokewidth], visible=WF[:visible])
end
"""
surface(plotter::MakiePlotter; kwargs...)
surface(dh::AbstractDofHandler, u::Vector; kwargs...)
surface!(plotter::MakiePlotter; kwargs...)
surface!(dh::AbstractDofHandler, u::Vector; kwargs...)
Uses the given `field` and plots the scalar values as a surface. If it's a vector valued problem, the nodal vector
values are transformed to a scalar based on `process`, which defaults to the magnitude. Only available for `dim=2`.
- `field = :default`
- `process = postprocess`
- `scale_plot = false`
- `shading = false`
- `colormap = :cividis`
"""
@recipe(Surface) do scene
Attributes(
field = :default,
process = postprocess,
scale_plot = false,
shading = false,
colormap = :cividis,
)
end
function Makie.plot!(SF::Surface{<:Tuple{<:MakiePlotter{2}}})
plotter = SF[1][]
solution = @lift begin
if $(SF[:field]) == :default
field_name = Ferrite.getfieldnames(plotter.dh)[1]
reshape(transfer_solution(plotter,$(plotter.u); field_name=field_name, process=$(SF[:process])), num_vertices(plotter))
else
reshape(transfer_solution(plotter,$(plotter.u); field_name=$(SF[:field]), process=$(SF[:process])), num_vertices(plotter))
end
end
coords = @lift begin
Point3f[Point3f(coord[1], coord[2], $(solution)[idx]) for (idx, coord) in enumerate(plotter.physical_coords)]
end
return Makie.mesh!(SF, coords, plotter.vis_triangles, color=solution, scale_plot=SF[:scale_plot], shading=SF[:shading], colormap=SF[:colormap])
end
"""
arrows(plotter::MakiePlotter; kwargs...)
arrows(dh::AbstractDofHandler, u::Vector; kwargs...)
arrows!(plotter::MakiePlotter; kwargs...)
arrows!(dh::AbstractDofHandler, u::Vector; kwargs...)
At every node position an arrow is drawn, with its tip ending at the node. Only works for `dim ≥ 2`. If a `color` is specified
the arrows are unicolored. Otherwise the color corresponds to the magnitude, or any other scalar value based on the `process` function.
- `arrowsize = 0.08`
- `normalize = true`
- `field = :default`
- `color = :default`
- `colormap = :cividis`
- `process=postprocess`
- `lengthscale = 1f0`
"""
@recipe(Arrows) do scene
Attributes(
arrowsize = Makie.Automatic(),
normalize = true, #TODO: broken
field = :default,
color = :default,
colormap = :cividis,
process=postprocess,
lengthscale = 1f0,
)
end
function Makie.plot!(AR::Arrows{<:Tuple{<:MakiePlotter{dim}}}) where dim
plotter = AR[1][]
solution = @lift begin
if $(AR[:field]) === :default
field_name = Ferrite.getfieldnames(plotter.dh)[1]
@assert Ferrite.getfielddim(plotter.dh,field_name) > 1
transfer_solution(plotter,$(plotter.u); field_name=field_name, process=identity)
else
@assert Ferrite.getfielddim(plotter.dh,$(AR[:field])) > 1
transfer_solution(plotter,$(plotter.u); field_name=$(AR[:field]), process=identity)
end
end
if dim == 2
ns = @lift([Vec2f(i) for i in eachrow($(solution))])
lengths = @lift($(AR[:color])===:default ? $(AR[:process]).($(ns)) : ones(length($(ns)))*$(AR[:color]))
elseif dim == 3
ns = @lift([Vec3f(i) for i in eachrow($(solution))])
lengths = @lift($(AR[:color])===:default ? $(AR[:process]).($(ns)) : ones(length($(ns)))*$(AR[:color]))
else
error("Arrows plots are only available in dim ≥ 2")
end
Makie.arrows!(AR, plotter.physical_coords, ns, arrowsize=AR[:arrowsize], colormap=AR[:colormap], color=lengths, lengthscale=AR[:lengthscale])
end
"""
elementinfo(ip::Interpolation; kwargs...)
elementinfo(cell::AbstractCell; kwargs...)
elementinfo(ip::Type{Interpolation}; kwargs...)
elementinfo(cell::Type{AbstractCell}; kwargs...)
- `plotnodes=true` controls if nodes of element are plotted
- `strokewidth=2` strokewidth of faces/edges
- `color=theme(scene, :linecolor)`
- `markersize=30` size of the nodes
- `textsize=60` textsize of node-, edges- and facelabels
- `nodelabels=true` switch that controls plotting of nodelabels
- `nodelabelcolor=:darkred`
- `nodelabeloffset=(0.0,0.0)` offset of the nodelabel text relative to its associated node
- `facelabels=true` switch that controls plotting of facelabels
- `facelabelcolor=:darkgreen`
- `facelabeloffset=(-40,0)` offset of the facelabel text relative to its associated face midpoint
- `edgelabels=true` switch that controls plotting of edgelabels
- `edgelabelcolor=:darkblue`
- `edgelabeloffset=(-40,-40)` offset of the edgelabel text relative to its associated edge midpoint
- `font="Julia Mono"` font of the node-, edge-, and facelabels
"""
@recipe(Elementinfo) do scene
Attributes(
plotnodes=true,
strokewidth=theme(scene, :linewidth),
color=theme(scene, :linecolor),
markersize=theme(scene, :markersize),
textsize=60,
nodelabels=true,
nodelabelcolor=:darkred,
nodelabeloffset=(0.0,0.0),
facelabels=true,
facelabelcolor=:darkgreen,
facelabeloffset=(-40,0),
edgelabels=true,
edgelabelcolor=:darkblue,
edgelabeloffset=(-40,-40),
font=theme(scene,:font),
)
end
function Makie.plot!(Ele::Elementinfo{<:Tuple{<:Ferrite.Interpolation{dim,refshape}}}) where {dim,refshape}
ip = Ele[1][]
elenodes = Ferrite.reference_coordinates(ip) |> x->reshape(reinterpret(Float64,x),(dim,length(x)))'
dim > 2 ? (lines = Point3f[]) : (lines = Point2f[])
facenodes = Ferrite.faces(ip)
if dim == 2
append!(lines, [elenodes[e,:] for boundary in facenodes for e in boundary[1:2]]) # 1:2 because higher order node in the middle
else
edgenodes = Ferrite.edges(ip)
order = Ferrite.getorder(ip)
#TODO remove the index monstrosity below after edges are defined consistently see https://github.com/Ferrite-FEM/Ferrite.jl/issues/520
append!(lines, [elenodes[e,:] for boundary in edgenodes for e in boundary[1:((refshape == Ferrite.RefCube) ? 1 : (order > 1 ? 2 : 1)):((refshape == Ferrite.RefCube) ? 2 : end)]]) # 1:2 because higher order node in the middle
end
boundaryentities = dim == 2 ? facenodes : edgenodes
#plot element boundary
Makie.linesegments!(Ele,lines,color=Ele[:color], linewidth=Ele[:strokewidth])
for (id,face) in enumerate(facenodes)
idx = 0
if refshape == Ferrite.RefCube && dim == 3
idx = 4
elseif refshape == Ferrite.RefTetrahedron && dim == 3
idx = 3
else
idx = 2
end
position = zeros(dim)
for i in 1:idx
position += elenodes[face[i],:]
end
position ./= idx
position = dim == 2 ? Point2f(position) : Point3f(position)
Makie.text!(Ele,"$id", position=position, textsize=Ele[:textsize], offset=Ele[:facelabeloffset],color=Ele[:facelabelcolor],visible=Ele[:facelabels],font=Ele[:font])
end
if dim == 3
for (id,edge) in enumerate(edgenodes)
position = Point3f((elenodes[edge[1],:] + elenodes[refshape==Ferrite.RefCube ? edge[2] : edge[end],:])*0.5)
t = Makie.text!(Ele,"$id", position=position, textsize=Ele[:textsize], offset=Ele[:edgelabeloffset],color=Ele[:edgelabelcolor],visible=Ele[:edgelabels],align=(:center,:center),font=Ele[:font])
# Boundingbox can't switch currently from pixelspace to "coordinate" space in recipes
#bb = Makie.boundingbox(t)
#Makie.wireframe!(Ele,bb,space=:pixel)
end
end
#plot the nodes
Makie.scatter!(Ele,elenodes,markersize=Ele[:markersize], color=Ele[:color], visible=Ele[:plotnodes])
#set up nodelabels
nodelabels = @lift $(Ele[:nodelabels]) ? ["$i" for i in 1:size(elenodes,1)] : [""]
nodepositions = @lift $(Ele[:nodelabels]) ? [dim < 3 ? Point2f(row) : Point3f(row) for row in eachrow(elenodes)] : (dim < 3 ? [Point2f((0,0))] : [Point3f((0,0,0))])
#set up celllabels
Makie.text!(Ele,nodelabels, position=nodepositions, textsize=Ele[:textsize], offset=Ele[:nodelabeloffset],color=Ele[:nodelabelcolor],font=Ele[:font])
#plot edges (3D) /faces (2D) of the mesh
Makie.linesegments!(Ele,lines,color=Ele[:color], linewidth=Ele[:strokewidth])
end
Makie.convert_arguments(P::Type{<:Elementinfo}, cell::C) where C<:Ferrite.AbstractCell = (Ferrite.default_interpolation(typeof(cell)),)
Makie.convert_arguments(P::Type{<:Elementinfo}, celltype::Type{C}) where C<:Ferrite.AbstractCell = (Ferrite.default_interpolation(celltype),)
Makie.convert_arguments(P::Type{<:Elementinfo}, iptype::Type{IP}) where IP<:Ferrite.Interpolation = (iptype(),)
"""
ferriteviewer(plotter::MakiePlotter)
    ferriteviewer(plotter::MakiePlotter, u_history::Vector{Vector{T}})
Constructs a viewer with a `solutionplot` and `Colorbar`, as well as sliders, toggles and menus to change the current view.
If the second dispatch is called, a timestep slider is added in order to step through a set of solutions obtained from a simulation.
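A usage sketch (`u_history` is a placeholder for a vector of solution vectors
collected during a simulation):
```julia
plotter = MakiePlotter(dh, u_history[end])
ferriteviewer(plotter, u_history)   # adds a timestep slider
```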
"""
function ferriteviewer(plotter::MakiePlotter{dim}) where dim
#set up figure and axis, axis either LScene in 3D or Axis if dim < 2
fig = Figure()
dim > 2 ? (ax = LScene(fig[1,1])) : (ax = Axis(fig[1,1]))
#toggles and their labels for switching plot types on/off
toggles = [Toggle(fig, active=active) for active in [true,false,false]]
labels = [Label(fig,label) for label in ["mesh", "deformation", "labels"]]
#setup the deformation_field as Observable
deformation_field = Makie.Observable(Ferrite.getfieldnames(plotter.dh)[1])
#solutionplot main plot of the viewer
solutionp = solutionplot!(plotter,colormap=:cividis,deformation_field=@lift $(toggles[2].active) ? $(deformation_field) : :default)
#setting up various sliders
markerslider = Slider(fig, range = 0:1:100, startvalue=5)
strokewidthslider = Slider(fig, range = 0:1:10, startvalue=1)
markersize = lift(x->x,markerslider.value)
strokewidth = lift(x->x,strokewidthslider.value)
#plot the fe-mesh
wireframep = wireframe!(plotter,markersize=markersize,strokewidth=strokewidth,deformation_field= @lift $(toggles[2].active) ? $(deformation_field) : :default)
#connect fe-mesh plot to the toggle
connect!(wireframep.visible,toggles[1].active)
connect!(wireframep.nodelabels,toggles[3].active)
connect!(wireframep.celllabels,toggles[3].active)
#set up dropdown menus for colormap, field, deformation field and processing function
menu_cm = Menu(fig, options=["cividis", "inferno", "thermal"], direction=:up)
menu_field = Menu(fig, options=Ferrite.getfieldnames(plotter.dh))
menu_deformation_field = Menu(fig, options=Ferrite.getfieldnames(plotter.dh))
menu_process = Menu(fig, options=[x₁,x₂,x₃,("magnitude",l2),("manhatten norm",l1),identity])
#align all menus as a vgrid under each other
fig[1,3] = vgrid!(grid!(hcat(toggles,labels), tellheight=false),
Label(fig,"nodesize",width=nothing), markerslider,
Label(fig,"strokewidth",width=nothing), strokewidthslider,
Label(fig,"processing function",width=nothing), menu_process,
Label(fig,"field",width=nothing), menu_field,
Label(fig, "deformation field",width=nothing),menu_deformation_field,
Label(fig, "colormap",width=nothing),menu_cm)
#add colorbar
cb = Colorbar(fig[1,2], solutionp)
#event handling for selecting stuff from the menus
on(menu_cm.selection) do s
cb.colormap = s
solutionp.colormap = s
end
on(menu_field.selection) do field
solutionp.field = field
end
on(menu_deformation_field.selection) do field
solutionp.deformation_field = field
wireframep.deformation_field = field
end
on(menu_process.selection) do process_function
solutionp.process=process_function
end
return fig
end
function ferriteviewer(plotter::MakiePlotter, data::Vector{Vector{T}}) where T
fig = ferriteviewer(plotter)
sg = SliderGrid(fig[2,1], (label="timestep n:", range=1:length(data), format = x->"$x"))
@lift(FerriteViz.update!(plotter,data[$(sg.sliders[1].value)]))
display(fig)
end
####### One Shot Methods #######
const FerriteVizPlots = Union{Type{<:Wireframe},Type{<:SolutionPlot},Type{<:Arrows},Type{<:Surface}}
function Makie.convert_arguments(P::FerriteVizPlots, dh::Ferrite.AbstractDofHandler, u::Vector)
return (MakiePlotter(dh,u),)
end
# FerriteViz (https://github.com/Ferrite-FEM/FerriteViz.jl.git, MIT license, v0.2.2)

# Helper... Refactoring needed.
function getfieldinterpolation(dh::Ferrite.DofHandler, field_name::Symbol)
field_idx = findfirst(==(field_name), dh.field_names)
field_idx === nothing && error("did not find field $field_name")
dh.field_interpolations[field_idx]
end
function field_offset(dh::Ferrite.DofHandler, field_idx::Int)
offset = 0
for i in 1:(field_idx-1)
offset += Ferrite.getnbasefunctions(dh.field_interpolations[i])::Int * dh.field_dims[i]
end
return offset
end
Ferrite.vertices(cell::Ferrite.Cell{3,3,1}) = cell.nodes
Ferrite.default_interpolation(::Type{Ferrite.Cell{3,3,1}}) = Ferrite.Lagrange{2,Ferrite.RefTetrahedron,1}()
"""
linear_face_cell(cell::Ferrite.Cell, local_face_idx::Int)
Get the geometrically linear face of a given cell.
!!! warning
This only extracts the face spanned by the vertices, which may not coincide with the actual (possibly curved) face!
"""
linear_face_cell(cell::Ferrite.Cell{3,N,4}, local_face_idx::Int) where N = Ferrite.Cell{3,3,1}(Ferrite.faces(cell)[local_face_idx])
linear_face_cell(cell::Ferrite.Cell{3,N,6}, local_face_idx::Int) where N = Ferrite.Quadrilateral3D(Ferrite.faces(cell)[local_face_idx])
# Obtain the face interpolation on regular geometries.
getfaceip(ip::Ferrite.Interpolation{dim, shape, order}, local_face_idx::Int) where {dim, shape <: Union{Ferrite.RefTetrahedron, Ferrite.RefCube}, order} = Ferrite.getlowerdim(ip)
struct MakiePlotter{dim,DH<:Ferrite.AbstractDofHandler,T1,TOP<:Union{Nothing,Ferrite.AbstractTopology},T2,M,TRI} <: AbstractPlotter
dh::DH
u::Makie.Observable{Vector{T1}} # original solution on the original mesh (i.e. dh.mesh)
topology::TOP
visible::Vector{Bool} #TODO change from per cell to per triangle
gridnodes::Vector{GeometryBasics.Point{dim,T2}} # coordinates of grid nodes in matrix form
physical_coords::Vector{GeometryBasics.Point{dim,T2}} #original coordinates in physical space of a vertex
physical_coords_mesh::ShaderAbstractions.Buffer{GeometryBasics.Point{dim,T2},Vector{GeometryBasics.Point{dim,T2}}} # coordinates in physical space of a vertex
all_triangles::Vector{TRI}
vis_triangles::ShaderAbstractions.Buffer{TRI,Vector{TRI}}
triangle_cell_map::Vector{Int}
cell_triangle_offsets::Vector{Int}
reference_coords::Matrix{T1} # coordinates on the associated reference cell for the corresponding triangle vertex
mesh::M
end
triangles_on_cell(plotter::MP, cell_idx::Int) where {MP <: MakiePlotter} = (plotter.cell_triangle_offsets[cell_idx]+1):plotter.cell_triangle_offsets[cell_idx+1]
"""
MakiePlotter(dh::Ferrite.AbstractDofHandler, u::Vector)
MakiePlotter(dh::Ferrite.AbstractDofHandler, u::Vector, topology::TOP) where {TOP<:Ferrite.AbstractTopology}
Builds a static triangulation of the underlying `grid` in `dh.grid` for rendering via Makie.
The triangulation acts as an "L2" triangulation, i.e. nodes which are shared between elements in the mesh are duplicated.
!!! tip
For large 3D grids, prefer to use the second constructor if you have already a `topology`.
Otherwise, it will be rebuilt which is time consuming.
"""
function MakiePlotter(dh::Ferrite.AbstractDofHandler, u::Vector, topology::TOP) where {TOP<:Union{Nothing,Ferrite.AbstractTopology}}
cells = Ferrite.getcells(dh.grid)
dim = Ferrite.getdim(dh.grid)
visible = zeros(Bool,length(cells))
if dim > 2
boundaryfaces = findall(isempty,topology.face_neighbor)
boundaryelements = Ferrite.getindex.(boundaryfaces,1)
else
boundaryelements = collect(1:Ferrite.getncells(dh.grid))
end
visible[boundaryelements] .= true
# We make no assumptions about the mesh, so first we have to look up how many triangles we need.
num_triangles = 0
cell_triangle_offsets = Vector{Int}(undef, length(cells)+1)
cell_triangle_offsets[1] = 0
for (cell_idx, cell) in enumerate(cells)
num_triangles += ntriangles(cell)
cell_triangle_offsets[cell_idx+1] = num_triangles
end
# Preallocate the matrices carrying the triangulation information
triangles = Matrix{Int}(undef, num_triangles, 3)
triangle_cell_map = Vector{Int}(undef, num_triangles)
physical_coords = Vector{GeometryBasics.Point{dim,Float32}}(undef, num_triangles*3)
gridnodes = [GeometryBasics.Point{dim,Float32}(node.data) for node in Ferrite.getcoordinates.(Ferrite.getnodes(dh.grid))]
reference_coords = Matrix{Float64}(undef, num_triangles*3,dim)
# Decompose does the heavy lifting for us
coord_offset = 1
triangle_offset = 1
for (cell_id,cell) ∈ enumerate(cells)
triangle_offset_begin = triangle_offset
(coord_offset, triangle_offset) = decompose!(coord_offset, physical_coords, reference_coords, triangle_offset, triangles, dh.grid, cell)
triangle_cell_map[triangle_offset_begin:(triangle_offset-1)] .= cell_id
end
all_triangles = Makie.to_triangles(triangles)
vis_triangles = copy(all_triangles)
n_visible = sum(visible[triangle_cell_map])
n_notvisible = length(all_triangles) - n_visible
vis_triangles[ .! visible[triangle_cell_map]] .= (GeometryBasics.GLTriangleFace(1,1,1) for i in 1:n_notvisible)
vis_triangles = ShaderAbstractions.Buffer(Makie.Observable(vis_triangles))
physical_coords_m = ShaderAbstractions.Buffer(Makie.Observable(copy(physical_coords)))
mesh = GeometryBasics.Mesh(physical_coords_m,vis_triangles)
return MakiePlotter{dim,typeof(dh),eltype(u),typeof(topology),Float32,typeof(mesh),eltype(vis_triangles)}(dh,Observable(u),topology,visible,gridnodes,physical_coords,physical_coords_m,all_triangles,vis_triangles,triangle_cell_map,cell_triangle_offsets,reference_coords,mesh)
end
MakiePlotter(dh,u) = MakiePlotter(dh,u,Ferrite.getdim(dh.grid) > 2 ? Ferrite.ExclusiveTopology(dh.grid.cells) : nothing)
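# Hypothetical usage sketch (assumes a solved Ferrite problem; `grid`, `dh` and `u`
# are placeholder names, not part of this package):
#
#     grid = Ferrite.generate_grid(Ferrite.Hexahedron, (8, 8, 8))
#     dh = Ferrite.DofHandler(grid)
#     Ferrite.add!(dh, :u, Ferrite.Lagrange{3,Ferrite.RefCube,1}())
#     Ferrite.close!(dh)
#     u = zeros(Ferrite.ndofs(dh))        # e.g. a finite element solution vector
#     plotter = MakiePlotter(dh, u)       # builds the topology internally for dim > 2
#     # or, reusing an existing topology to avoid the expensive rebuild:
#     topology = Ferrite.ExclusiveTopology(grid.cells)
#     plotter = MakiePlotter(dh, u, topology)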
"""
ClipPlane{T}(normal, distance_to_origin)
Clip plane with data of type `T` described by the normal and its distance to the coordinate origin.
!!! details
**INTERNAL**: Instances are callable as `plane(grid, cellid)`, returning true if all nodes of a cell lie on the non-positive side of the plane,
i.e. coord ⋅ plane.normal ≤ plane.distance for every node coordinate. With this helper we perform the crinkle clip internally. xref `[crinkle_clip!](@ref)`
"""
struct ClipPlane{T}
normal::Tensors.Vec{3,T}
distance::T
end
# Binary decision function to clip a cell with a plane for the crinkle clip.
function (plane::ClipPlane)(grid, cellid)
cell = grid.cells[cellid]
coords = Ferrite.getcoordinates.(Ferrite.getnodes(grid)[[cell.nodes...]])
for coord ∈ coords
if coord ⋅ plane.normal > plane.distance
return false
end
end
return true
end
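# Illustrative sketch (names are placeholders): a plane that keeps cells whose
# nodes all satisfy z ≤ 0.5.
#
#     plane = ClipPlane(Tensors.Vec((0.0, 0.0, 1.0)), 0.5)
#     plane(grid, 1)   # true iff every node of cell 1 has coord ⋅ normal ≤ 0.5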
"""
crinkle_clip!(plotter::MakiePlotter{3}, decision_fun)
Crinkle clip updates the visibility of the triangles, based on an
implicit description of the clipping surface. Here `decision_fun` takes the grid and
a cell index as input and returns whether the cell is visible or not.
!!! warning
Chained calls to `crinkle_clip!` won't work at the moment.
"""
function crinkle_clip!(plotter::MakiePlotter{3,DH,T}, decision_fun::DF) where {DH,T,DF}
dh = plotter.dh
u = plotter.u
grid = dh.grid
# We iterate over all triangles and check if the corresponding cell is visible.
for (cell_id, cell) ∈ enumerate(Ferrite.getcells(plotter.dh.grid))
dfun_visible = decision_fun(grid, cell_id)
if dfun_visible
cell_neighbors = Ferrite.getneighborhood(plotter.topology, grid, Ferrite.CellIndex(cell_id))
plotter.visible[cell_id] = !all(decision_fun.((grid,),cell_neighbors)) || plotter.visible[cell_id]
else
plotter.visible[cell_id] = false
end
end
plotter.vis_triangles[plotter.visible[plotter.triangle_cell_map]] = plotter.all_triangles[plotter.visible[plotter.triangle_cell_map]]
plotter.vis_triangles[ .! plotter.visible[plotter.triangle_cell_map]] = [GeometryBasics.GLTriangleFace(1,1,1) for i in 1:sum(.! plotter.visible[plotter.triangle_cell_map])]
nothing
end
"""
crinkle_clip(plotter::MakiePlotter{3}, decision_fun) -> MakiePlotter
Crinkle clip generates a new plotter with updated visibility of the triangles.
Non-mutating version of `crinkle_clip!`.
Note that chained calls to `crinkle_clip` won't work.
"""
function crinkle_clip(plotter::MakiePlotter{3,DH,T}, decision_fun) where {DH,T}
physical_coords_m = ShaderAbstractions.Buffer(Makie.Observable(copy(plotter.physical_coords_mesh)))
vis_triangles = ShaderAbstractions.Buffer(Makie.Observable(copy(plotter.vis_triangles)))
plotter_clipped = MakiePlotter{3,DH,T,typeof(plotter.topology),Float32,typeof(plotter.mesh),eltype(plotter.vis_triangles)}(
plotter.dh,
plotter.u,
plotter.topology,
copy(plotter.visible),
plotter.gridnodes,
plotter.physical_coords,
physical_coords_m,
plotter.all_triangles,
vis_triangles,
plotter.triangle_cell_map,
plotter.cell_triangle_offsets,
plotter.reference_coords,
GeometryBasics.Mesh(physical_coords_m,vis_triangles))
crinkle_clip!(plotter_clipped,decision_fun)
return plotter_clipped
end
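# Sketch of a crinkle clip (assumes `plotter` is a 3D MakiePlotter as constructed above):
#
#     clipped = crinkle_clip(plotter, ClipPlane(Tensors.Vec((0.0, 0.0, 1.0)), 0.5))
#     # or mutate in place with an arbitrary decision function:
#     crinkle_clip!(plotter, (grid, cellid) -> iseven(cellid))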
"""
Total number of vertices
"""
num_vertices(p::MakiePlotter) = size(p.physical_coords,1)
# TODO this looks faulty...think harder.
# Helper to count triangles e.g. for preallocations.
ntriangles(cell::Ferrite.AbstractCell{2,N,3}) where {N} = 1 # Tris in 2D
ntriangles(cell::Ferrite.AbstractCell{3,3,1}) = 1 # Tris in 3D
ntriangles(cell::Ferrite.AbstractCell{dim,N,4}) where {dim,N} = 4 # Quads in 2D and 3D
ntriangles(cell::Ferrite.AbstractCell{3,N,1}) where N = 4 # Tets as a special case of a Quad, obviously :)
ntriangles(cell::Ferrite.AbstractCell{3,N,6}) where N = 6*4 # Hex
"""
Get the vertices represented as a list of coordinates of a cell.
!!! details
**TODO** refactor into Ferrite core.
"""
function vertices(grid::Ferrite.AbstractGrid, cell::Ferrite.AbstractCell{dim,N,M}) where {dim,N,M}
Ferrite.getnodes(grid)[[Ferrite.vertices(cell)...]]
end
"""
decompose!(coord_offset, coord_matrix, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Union{Ferrite.AbstractCell{2,N,3}, Ferrite.AbstractCell{3,3,1}})
Decompose a triangle into a coordinates and a triangle index list to disconnect it properly. Guarantees to preserve orderings and orientations.
"""
function decompose!(coord_offset, coord_matrix, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Union{Ferrite.AbstractCell{2,N,3}, Ferrite.AbstractCell{3,3,1}}) where {N}
for (i,v) in enumerate(vertices(grid, cell))
coord_matrix[coord_offset] = GeometryBasics.Point(Ferrite.getcoordinates(v)...)
ref_coord_matrix[coord_offset,1:2] = Ferrite.reference_coordinates(Ferrite.default_interpolation(typeof(cell)))[i]
triangle_matrix[triangle_offset,i] = coord_offset
coord_offset+=1
end
triangle_offset+=1
(coord_offset, triangle_offset)
end
@doc raw"""
decompose!(coord_offset, coord_matrix::Vector{Point{space_dim,T}}, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Union{Ferrite.AbstractCell{2,N,4}, Ferrite.AbstractCell{3,4,1}})
Decompose a quadrilateral into a coordinates and a triangle index list to disconnect it properly. Guarantees to preserve orderings and orientations.
!!! details
This function takes a CCW ordered quadrilateral, i.e.
```
4-------3
| |
| |
| |
| |
| |
1-------2
```
and creates the decomposition
```
4-------3
| \ C / |
| \ / |
|D 5 B|
| / \ |
| / A \ |
1-------2
```
where A=(1,2,5),B=(2,3,5),C=(3,4,5),D=(4,1,5) are the generated triangles in this order.
"""
function decompose!(coord_offset, coord_matrix::Vector{Point{space_dim,T}}, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Union{Ferrite.AbstractCell{2,N,4}, Ferrite.AbstractCell{3,4,1}}) where {N,space_dim,T}
# A bit more complicated. The default diagonal decomposition into 2 triangles is missing a solution mode.
# To resolve this we make a more expensive decomposition in 4 triangles which correctly displays the solution mode (in linear problems).
coord_offset_initial = coord_offset
vts = vertices(grid, cell)
# Compute coordinate of vertex 5
center = zeros(space_dim)
for v in vts
center += Ferrite.getcoordinates(v)
end
center /= 4.0
# Generate triangles in order
for i = 1:length(vts)
v1 = vts[i]
# next index on the outer chain CCW, wrapping around at the end
i2 = mod1(i + 1, length(vts))
v2 = vts[i2]
# current vertex
coord_matrix[coord_offset] = GeometryBasics.Point(Ferrite.getcoordinates(v1)...)
ref_coord_matrix[coord_offset, 1:2] = Ferrite.reference_coordinates(Ferrite.default_interpolation(typeof(cell)))[i]
coord_offset+=1
# next vertex in chain
coord_matrix[coord_offset] = GeometryBasics.Point(Ferrite.getcoordinates(v2)...)
ref_coord_matrix[coord_offset, 1:2] = Ferrite.reference_coordinates(Ferrite.default_interpolation(typeof(cell)))[i2]
coord_offset+=1
# center vertex (5)
coord_matrix[coord_offset] = GeometryBasics.Point(center...)
ref_coord_matrix[coord_offset, 1:2] .= ntuple(x->0.0, 2)
coord_offset+=1
# collect indices
triangle_matrix[triangle_offset, :] = (0:2) .+ (coord_offset_initial+3*(i-1))
triangle_offset+=1
end
(coord_offset, triangle_offset)
end
"""
decompose!(coord_offset, coord_matrix, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Ferrite.AbstractCell{3,N,M})
Decompose volumetric objects via their faces.
"""
function decompose!(coord_offset, coord_matrix, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Ferrite.AbstractCell{3,N,M}) where {N,M}
# Just 6 quadrilaterals :)
for face_index ∈ 1:M
face_coord_offset = coord_offset
(coord_offset, triangle_offset) = decompose!(coord_offset, coord_matrix, ref_coord_matrix, triangle_offset, triangle_matrix, grid, linear_face_cell(cell, face_index))
for ci ∈ face_coord_offset:(coord_offset-1)
new_coord = transfer_quadrature_face_to_cell(ref_coord_matrix[ci, 1:2], cell, face_index)
ref_coord_matrix[ci, :] = new_coord
end
end
(coord_offset, triangle_offset)
end
refshape(cell::Ferrite.AbstractCell) = typeof(Ferrite.default_interpolation(typeof(cell))).parameters[2]
x₁(x) = x[1]
x₂(x) = x[2]
x₃(x) = x[3]
l2(x) = LinearAlgebra.norm(x,2)
l1(x) = LinearAlgebra.norm(x,1)
midpoint(cell::Ferrite.AbstractCell{2,N,3}, points) where N = Point2f((1/3) * (points[cell.nodes[1]] + points[cell.nodes[2]] + points[cell.nodes[3]]))
midpoint(cell::Ferrite.AbstractCell{2,N,4}, points) where N = Point2f(0.5 * (points[cell.nodes[1]] + points[cell.nodes[3]]))
midpoint(cell::Ferrite.AbstractCell{3,N,4}, points) where N = Point3f((1/4) * (points[cell.nodes[1]] + points[cell.nodes[2]] + points[cell.nodes[3]] + points[cell.nodes[4]]))
midpoint(cell::Ferrite.AbstractCell{3,N,6}, points) where N = Point3f(0.5 * (points[cell.nodes[1]] + points[cell.nodes[7]]))
"""
postprocess(node_values::Vector{T}) -> T
Takes the nodal dof vector and maps it either to the scalar or to the
euclidean norm (in the vectorial case)
"""
function postprocess(node_values)
dim = length(node_values)
if dim == 1
return node_values[1] # scalar field: vector of length 1
else
return sqrt(sum(node_values.^2))
end
end
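# For instance (plain Julia, no package state involved):
#
#     postprocess([3.0])        # 3.0 (scalar field)
#     postprocess([3.0, 4.0])   # 5.0 (Euclidean norm of a 2D vector field)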
function getfieldhandlers(dh::Ferrite.DofHandler,field_name)
names = Ferrite.getfieldnames(dh)
field_idx = Ferrite.find_field.((dh,),names)
ip_field = Ferrite.getfieldinterpolation.((dh,),field_idx)
field_dim_ = Ferrite.getfielddim.((dh,),field_idx)
return [Ferrite.FieldHandler([Ferrite.Field(fname,fip,fdim) for (fname,fip,fdim) in zip(names,ip_field,field_dim_)],Set(1:Ferrite.getncells(dh.grid)))]
end
function getfieldhandlers(dh::Ferrite.MixedDofHandler,field_name)
fhs = Ferrite.FieldHandler[]
for fh in dh.fieldhandlers
for field in fh.fields
if field.name == field_name
push!(fhs,fh)
break
end
end
end
return fhs
end
"""
transfer_solution(plotter::MakiePlotter{dim,DH,T}, u::Vector; field_idx::Int=1, process::Function=FerriteViz.postprocess) where {dim,DH<:Ferrite.AbstractDofHandler,T}
Transfer the solution of a plotter to the tessellated mesh in `dim`.
!!! details
**TODO**: Refactor. This is peak inefficiency.
"""
function transfer_solution(plotter::MakiePlotter{dim,DH,T}, u::Vector; field_name=:u, process::FUN=FerriteViz.postprocess) where {dim,DH<:Ferrite.AbstractDofHandler,T,FUN}
# select objects from plotter
dh = plotter.dh
grid = dh.grid
# field related variables
field_dim = Ferrite.getfielddim(dh, field_name)
val_buffer = zeros(T,field_dim)
val = process(val_buffer)
_processreturn = length(process(val_buffer))
data = fill(NaN, num_vertices(plotter),_processreturn)
for fh in getfieldhandlers(dh,field_name)
ip_field = Ferrite.getfieldinterpolation(fh,field_name)
cellset_ = collect(fh.cellset)
cell_geo_ref = Ferrite.getcells(grid, cellset_[1])
ntriangles(cell_geo_ref) == 0 && continue
ip_geo = Ferrite.default_interpolation(typeof(cell_geo_ref))
pv = Ferrite.PointScalarValues(ip_field, ip_geo)
_transfer_solution!(data,pv,fh,ip_geo,ip_field,cellset_,val_buffer,val,field_name,field_dim,plotter,u,process) #function barrier for ip_field and thus pointvalues
end
return data
end
function _transfer_solution!(data,pv,fh,ip_geo,ip_field,cellset_,val_buffer,val,field_name,field_dim,plotter::MakiePlotter{dim,DH,T}, u::Vector, process::FUN) where {dim,DH<:Ferrite.AbstractDofHandler,T,FUN}
n_vertices_per_tri = 3 # we have 3 vertices per triangle...
dh = plotter.dh
ref_coords = plotter.reference_coords
grid = dh.grid
# actual data
local_dof_range = Ferrite.dof_range(fh, field_name)
_processreturndim = length(process(val_buffer))
cell_geo_ref = Ferrite.getcells(grid, cellset_[1])
Ferrite.reinit!(pv, Ferrite.getcoordinates(grid,cellset_[1]), Tensors.Vec{dim}(ref_coords[1,:]))
n_basefuncs = Ferrite.getnbasefunctions(pv)
_local_coords = Ferrite.getcoordinates(grid,cellset_[1])
_local_celldofs = Ferrite.celldofs(dh,cellset_[1])
_celldofs_field = reshape(@view(_local_celldofs[local_dof_range]), (field_dim, n_basefuncs))
_local_ref_coords = Tensors.Vec{dim}(ref_coords[1,:])
# We just loop over all cells
for (isvisible,(cell_idx,cell_geo)) in zip(plotter.visible,enumerate(Ferrite.getcells(dh.grid)))
# Skip invisible cells and cells which are not in the current cellset
if !isvisible || cell_idx ∉ cellset_
continue
end
# Buffer cell data relevant for the current field to transfer
Ferrite.getcoordinates!(_local_coords,grid,cell_idx)
Ferrite.celldofs!(_local_celldofs,dh,cell_idx)
_celldofs_field = reshape(@view(_local_celldofs[local_dof_range]), (field_dim, n_basefuncs))
# Loop over the triangles of the cell and interpolate at the vertices
# TODO remove redundant function value calls
for triangle_index in triangles_on_cell(plotter, cell_idx)
for current_vertex_index in plotter.all_triangles[triangle_index]
_local_ref_coords = Tensors.Vec{dim}(@view(ref_coords[current_vertex_index,:]))
Ferrite.reinit!(pv, _local_coords, _local_ref_coords)
for d in 1:field_dim
val_buffer[d] = Ferrite.function_value(pv, 1, @views(u[_celldofs_field[d,:]]))
end
val = process(val_buffer)
for d in 1:_processreturndim
data[current_vertex_index, d] = val[d]
end
end
end
end
end
function transfer_scalar_celldata(plotter::MakiePlotter{dim,DH,T}, u::Vector; process::FUN=FerriteViz.postprocess) where {dim,DH<:Ferrite.AbstractDofHandler,T,FUN<:Function}
n_vertices_per_tri = 3 # we have 3 vertices per triangle...
# select objects from plotter
dh = plotter.dh
grid = dh.grid
current_vertex_index = 1
data = fill(0.0, num_vertices(plotter))
for (isvisible,(cell_idx,cell_geo)) in zip(plotter.visible,enumerate(Ferrite.getcells(dh.grid)))
if !isvisible
current_vertex_index += ntriangles(cell_geo)*n_vertices_per_tri
continue
end
ncvertices = ntriangles(cell_geo)*n_vertices_per_tri
for i in 1:ncvertices
data[current_vertex_index] = process(u[cell_idx])
current_vertex_index += 1
end
end
return data::Vector{T}
end
get_gradient_interpolation(::Ferrite.Lagrange{dim,shape,order}) where {dim,shape,order} = Ferrite.DiscontinuousLagrange{dim,shape,order-1}()
get_gradient_interpolation_type(::Type{Ferrite.Lagrange{dim,shape,order}}) where {dim,shape,order} = Ferrite.DiscontinuousLagrange{dim,shape,order-1}
# TODO remove if Knuth's PR on this gets merged (Ferrite PR 552)
getgrid(dh::Ferrite.DofHandler) = dh.grid
function ε(x::Vector{T}) where T
ngrad = length(x)
dim = isqrt(ngrad)
∇u = Tensor{2,dim,T,ngrad}(x)
return symmetric(∇u)
end
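# Example: for a 2D gradient flattened column-major into [1.0, 2.0, 3.0, 4.0],
# Tensors.jl reconstructs the matrix [1 3; 2 4] and ε returns its symmetric part
# [1 2.5; 2.5 4], i.e. the small-strain tensor of a displacement gradient:
#
#     ε([1.0, 2.0, 3.0, 4.0])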
"""
_tensorsjl_gradient_accessor(v::Tensors.Vec, field_dim_idx::Int, spatial_dim_idx::Int)
This is a helper to access the correct value in Tensors.jl entities, because the gradient index is the outermost one.
"""
@inline _tensorsjl_gradient_accessor(v::Tensors.Vec{dim}, field_dim_idx::Int, spatial_dim_idx::Int) where {dim} = v[spatial_dim_idx]
@inline _tensorsjl_gradient_accessor(m::Tensors.Tensor{2,dim}, field_dim_idx::Int, spatial_dim_idx::Int) where {dim} = m[field_dim_idx, spatial_dim_idx]
"""
interpolate_gradient_field(dh::DofHandler, u::AbstractVector, field_name::Symbol; copy_fields::Vector{Symbol})
Compute the piecewise discontinuous gradient field for `field_name`. Returns the flux dof handler and the corresponding flux dof values.
If the additional keyword argument `copy_fields` is provided with a non empty `Vector{Symbol}`, the corresponding fields of `dh` will be
copied into the returned flux dof handler and flux dof value vector.
"""
function interpolate_gradient_field(dh::Ferrite.DofHandler{spatial_dim}, u::AbstractVector, field_name::Symbol; copy_fields::Vector{Symbol}=Symbol[]) where {spatial_dim}
# Get some helpers
field_idx = Ferrite.find_field(dh, field_name)
ip = Ferrite.getfieldinterpolation(dh, field_idx)
# Create dof handler for gradient field
dh_gradient = Ferrite.DofHandler(getgrid(dh))
ip_gradient = get_gradient_interpolation(ip)
field_dim = Ferrite.getfielddim(dh,field_name)
Ferrite.add!(dh_gradient, :gradient, field_dim*spatial_dim, ip_gradient) # field dim × spatial dim components
for fieldname in copy_fields
_field_idx = Ferrite.find_field(dh, fieldname)
_ip = Ferrite.getfieldinterpolation(dh, _field_idx)
_field_dim = Ferrite.getfielddim(dh,fieldname)
Ferrite.add!(dh_gradient, fieldname, _field_dim, _ip)
end
Ferrite.close!(dh_gradient)
# FIXME this does not work for mixed grids
ip_geom = Ferrite.default_interpolation(typeof(Ferrite.getcells(getgrid(dh), 1)))
ref_coords_gradient = Ferrite.reference_coordinates(ip_gradient)
qr_gradient = Ferrite.QuadratureRule{spatial_dim, refshape(Ferrite.getcells(getgrid(dh), 1)), Float64}(ones(length(ref_coords_gradient)), ref_coords_gradient)
cv = (field_dim == 1) ? Ferrite.CellScalarValues(qr_gradient, ip, ip_geom) : Ferrite.CellVectorValues(qr_gradient, ip, ip_geom)
# Buffer for the dofs
cell_dofs = zeros(Int, Ferrite.ndofs_per_cell(dh))
cell_dofs_gradient = zeros(Int, Ferrite.ndofs_per_cell(dh_gradient))
# Allocate storage for the fluxes to store
u_gradient = zeros(Ferrite.ndofs(dh_gradient))
# In general uᵉ_gradient is an order 3 tensor [spatial_dim, field_dim, nqp]
uᵉ_gradient = zeros(length(cell_dofs_gradient[Ferrite.dof_range(dh_gradient, :gradient)]))
uᵉshape = (spatial_dim, field_dim, Ferrite.getnquadpoints(cv))
uᵉ_gradient_view = reshape(uᵉ_gradient, uᵉshape)
for (cell_num, cell) in enumerate(Ferrite.CellIterator(dh))
# Get element dofs on parent field
Ferrite.celldofs!(cell_dofs, dh, cell_num)
uᵉ = @views u[cell_dofs[Ferrite.dof_range(dh, field_name)]]
# And initialize cellvalues for the cell to evaluate the gradient at the basis functions
# of the gradient field
Ferrite.reinit!(cv, cell)
# Now we simply loop over all basis functions of the gradient field and evaluate the gradient
for i ∈ 1:Ferrite.getnquadpoints(cv)
uᵉgradi = Ferrite.function_gradient(cv, i, uᵉ)
for ds in 1:spatial_dim
for df in 1:field_dim
uᵉ_gradient_view[ds, df, i] = _tensorsjl_gradient_accessor(uᵉgradi, df, ds)
end
end
end
# We finally write back the result to the global dof vector of the gradient field
Ferrite.celldofs!(cell_dofs_gradient, dh_gradient, cell_num)
u_gradient[cell_dofs_gradient[Ferrite.dof_range(dh_gradient, :gradient)]] .+= uᵉ_gradient
# copy all requested fields
for fieldname in copy_fields
u_gradient[cell_dofs_gradient[Ferrite.dof_range(dh_gradient, fieldname)]] .= u[cell_dofs[Ferrite.dof_range(dh, fieldname)]]
end
end
return dh_gradient, u_gradient
end
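# Hedged usage sketch (names are placeholders): derive a gradient field from a
# displacement solution and carry a pressure field along for later plotting:
#
#     dh_grad, u_grad = interpolate_gradient_field(dh, u, :u; copy_fields=[:p])
#     plotter_grad = MakiePlotter(dh_grad, u_grad)
#     # the gradient dofs live in the :gradient field of dh_grad; ε from above can
#     # serve as a `process` function mapping the flattened gradient to strains.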
# maps the dof vector in nodal order, only needed for wireframe nodal deformation (since we display the original nodes)
function dof_to_node(dh::Ferrite.AbstractDofHandler, u::Vector{T}; field_name=:u) where T
field_dim = Ferrite.getfielddim(dh, field_name)
data = fill(NaN, Ferrite.getnnodes(dh.grid), field_dim)
fhs = getfieldhandlers(dh,field_name)
for fh in fhs
dof_range_ = Ferrite.dof_range(fh, field_name)
for cell in Ferrite.CellIterator(dh,fh.cellset)
_celldofs = Ferrite.celldofs(cell)
local_celldofs_field = reshape(@view(_celldofs[dof_range_]), (field_dim,length(cell.nodes)))
for (local_nodeid,node) in enumerate(cell.nodes)
for d in 1:field_dim
data[node, d] = u[local_celldofs_field[d,local_nodeid]]
end
end
end
end
return data
end
"""
transfer_quadrature_face_to_cell(point::AbstractVector, cell::Ferrite.AbstractCell{3,N,4}, face::Int)
Mapping from 2D triangle to 3D face of a tetrahedon.
"""
function transfer_quadrature_face_to_cell(point::AbstractVector, cell::Ferrite.AbstractCell{3,N,4}, face::Int) where {N}
x,y = point
face == 1 && return [ 1-x-y, y, 0]
face == 2 && return [ y, 0, 1-x-y]
face == 3 && return [ x, y, 1-x-y]
face == 4 && return [ 0, 1-x-y, y]
end
"""
transfer_quadrature_face_to_cell(point::AbstractVector, cell::Ferrite.AbstractCell{3,N,6}, face::Int)
Mapping from 2D quadrilateral to 3D face of a hexahedron.
"""
function transfer_quadrature_face_to_cell(point::AbstractVector, cell::Ferrite.AbstractCell{3,N,6}, face::Int) where {N}
x,y = point
face == 1 && return [ y, x, -1]
face == 2 && return [ x, -1, y]
face == 3 && return [ 1, x, y]
face == 4 && return [-x, 1, y]
face == 5 && return [-1, y, x]
face == 6 && return [ x, y, 1]
end
"""
uniform_refinement(plotter::MakiePlotter)
uniform_refinement(plotter::MakiePlotter, num_refinements::Int)
Generates 4 triangles for each triangle by inserting edge midpoints and connecting them (orientation preserving).
!!! danger
This method has high RAM usage!
!!! info
This function currently does not increase the resolution of the geometrical points in space, only the solution quality!
!!! details
**TODO** investigate whether it is possible to eliminate the coordinate duplication without trashing the caches
"""
function uniform_refinement(plotter::MakiePlotter{dim,DH,T1,TOP,T2,M,TRI}) where {dim,DH,T1,TOP,T2,M,TRI}
# Number of triangles in the unrefined mesh
total_triangles = length(plotter.all_triangles)
# TODO keep the old reference coordinates and just add one new per triangle for the new center vertex
# refined_reference_coords = Matrix{T1}(undef, size(plotter.reference_coords, 1) + total_triangles, dim)
refined_reference_coords = Matrix{T1}(undef, 4*3*total_triangles, dim) # 4 new triangles with 3 vertices each
refined_physical_coords = Vector{GeometryBasics.Point{dim,T2}}(undef, 4*3*total_triangles) # 4 new triangles with 3 vertices each
refined_triangle_cell_map = Vector{Int}(undef, 4*total_triangles)
refined_triangles = Matrix{Int}(undef, 4*total_triangles, 3)
for triangle_index ∈ 1:total_triangles
current_triangle = plotter.all_triangles[triangle_index]
# Compute midpoint of reference coordinates
current_ref_coordinates = @view plotter.reference_coords[current_triangle, :]
midpoint_ref_coordinate_12 = (current_ref_coordinates[1,:]+current_ref_coordinates[2,:])/2.0
midpoint_ref_coordinate_23 = (current_ref_coordinates[2,:]+current_ref_coordinates[3,:])/2.0
midpoint_ref_coordinate_31 = (current_ref_coordinates[3,:]+current_ref_coordinates[1,:])/2.0
# Setup reference coordinates
# Triangle 1
refined_reference_coords[4*3*(triangle_index-1)+1, :] = current_ref_coordinates[1, :]
refined_reference_coords[4*3*(triangle_index-1)+2, :] = midpoint_ref_coordinate_12
refined_reference_coords[4*3*(triangle_index-1)+3, :] = midpoint_ref_coordinate_31
# Triangle 2
refined_reference_coords[4*3*(triangle_index-1)+4, :] = current_ref_coordinates[2, :]
refined_reference_coords[4*3*(triangle_index-1)+5, :] = midpoint_ref_coordinate_23
refined_reference_coords[4*3*(triangle_index-1)+6, :] = midpoint_ref_coordinate_12
# Triangle 3
refined_reference_coords[4*3*(triangle_index-1)+7, :] = current_ref_coordinates[3, :]
refined_reference_coords[4*3*(triangle_index-1)+8, :] = midpoint_ref_coordinate_31
refined_reference_coords[4*3*(triangle_index-1)+9, :] = midpoint_ref_coordinate_23
# Triangle 4
refined_reference_coords[4*3*(triangle_index-1)+10, :] = midpoint_ref_coordinate_12
refined_reference_coords[4*3*(triangle_index-1)+11, :] = midpoint_ref_coordinate_23
refined_reference_coords[4*3*(triangle_index-1)+12, :] = midpoint_ref_coordinate_31
# TODO use geometric interpolation here!
# Compute vertex position in physical space at midpoint
current_physical_coordinates = @view plotter.physical_coords[current_triangle]
midpoint_physical_coordinate_12 = (current_physical_coordinates[1]+current_physical_coordinates[2])/2.0
midpoint_physical_coordinate_23 = (current_physical_coordinates[2]+current_physical_coordinates[3])/2.0
midpoint_physical_coordinate_31 = (current_physical_coordinates[3]+current_physical_coordinates[1])/2.0
# Setup physical coordinates
# Triangle 1
refined_physical_coords[4*3*(triangle_index-1)+1] = current_physical_coordinates[1]
refined_physical_coords[4*3*(triangle_index-1)+2] = midpoint_physical_coordinate_12
refined_physical_coords[4*3*(triangle_index-1)+3] = midpoint_physical_coordinate_31
# Triangle 2
refined_physical_coords[4*3*(triangle_index-1)+4] = current_physical_coordinates[2]
refined_physical_coords[4*3*(triangle_index-1)+5] = midpoint_physical_coordinate_23
refined_physical_coords[4*3*(triangle_index-1)+6] = midpoint_physical_coordinate_12
# Triangle 3
refined_physical_coords[4*3*(triangle_index-1)+7] = current_physical_coordinates[3]
refined_physical_coords[4*3*(triangle_index-1)+8] = midpoint_physical_coordinate_31
refined_physical_coords[4*3*(triangle_index-1)+9] = midpoint_physical_coordinate_23
# Triangle 4
refined_physical_coords[4*3*(triangle_index-1)+10] = midpoint_physical_coordinate_12
refined_physical_coords[4*3*(triangle_index-1)+11] = midpoint_physical_coordinate_23
refined_physical_coords[4*3*(triangle_index-1)+12] = midpoint_physical_coordinate_31
# Setup inverse mapping
refined_triangle_cell_map[(4*(triangle_index-1)+1):4*triangle_index] .= plotter.triangle_cell_map[triangle_index]
# Setup new triangles
refined_triangles[4*(triangle_index-1)+1, :] = [4*3*(triangle_index-1)+1, 4*3*(triangle_index-1)+2, 4*3*(triangle_index-1)+3]
refined_triangles[4*(triangle_index-1)+2, :] = [4*3*(triangle_index-1)+4, 4*3*(triangle_index-1)+5, 4*3*(triangle_index-1)+6]
refined_triangles[4*(triangle_index-1)+3, :] = [4*3*(triangle_index-1)+7, 4*3*(triangle_index-1)+8, 4*3*(triangle_index-1)+9]
refined_triangles[4*(triangle_index-1)+4, :] = [4*3*(triangle_index-1)+10, 4*3*(triangle_index-1)+11, 4*3*(triangle_index-1)+12]
end
refined_triangles = Makie.to_triangles(refined_triangles)
refined_vis_triangles = copy(refined_triangles)
n_visible = sum(plotter.visible[refined_triangle_cell_map])
n_notvisible = length(refined_triangles) - n_visible
refined_vis_triangles[ .! plotter.visible[refined_triangle_cell_map]] .= (GeometryBasics.GLTriangleFace(1,1,1) for i in 1:n_notvisible)
refined_vis_triangles = ShaderAbstractions.Buffer(Makie.Observable(refined_vis_triangles))
refined_physical_coords_m = ShaderAbstractions.Buffer(Makie.Observable(copy(refined_physical_coords)))
refined_mesh = GeometryBasics.Mesh(refined_physical_coords_m, refined_vis_triangles)
return MakiePlotter{dim,DH,T1,TOP,T2,M,TRI}(
plotter.dh, plotter.u, plotter.topology, plotter.visible, plotter.gridnodes,
refined_physical_coords, refined_physical_coords_m, refined_triangles, refined_vis_triangles, refined_triangle_cell_map,
plotter.cell_triangle_offsets .* 4, refined_reference_coords, refined_mesh
)
end
function uniform_refinement(plotter::MakiePlotter{dim,DH,T1,TOP,T2,M,TRI}, num_refinements::Int) where {dim,DH,T1,TOP,T2,M,TRI}
num_refinements == 0 && return plotter
new_plotter = uniform_refinement(plotter)
new_plotter = uniform_refinement(new_plotter, num_refinements-1)
return new_plotter
end
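# Example: two uniform refinement passes, so each original triangle becomes 16
# (assumes `plotter` was built as above; mind the RAM warning in the docstring):
#
#     plotter_fine = uniform_refinement(plotter, 2)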
using FerriteViz, Ferrite
using Test
# _test_tolerance(ip::Interpolation{<:Any,<:Any,1}) = 5e-1
_test_tolerance(ip::Interpolation) = 1e-6 # Float32 computations are involved!
struct MatrixValued <: Ferrite.FieldTrait end
function Ferrite.function_value(::MatrixValued, fe_v::Ferrite.Values{dim}, q_point::Int, u::AbstractVector) where {dim}
n_base_funcs = Ferrite.getn_scalarbasefunctions(fe_v)
length(u) == n_base_funcs*dim^2 || Ferrite.throw_incompatible_dof_length(length(u), n_base_funcs)
@boundscheck Ferrite.checkquadpoint(fe_v, q_point)
val = zero(Tensor{2, dim})
@inbounds for I ∈ 1:n_base_funcs*dim^2
# First flatten to vector
i0, c0 = divrem(I - 1, dim^2)
i = i0 + 1
v = Ferrite.shape_value(fe_v, q_point, i)
# Then compute matrix index
ci0, cj0 = divrem(c0, dim)
ci = ci0 + 1
cj = cj0 + 1
val += Ferrite.Tensor{2, dim}((k, l) -> k == ci && l == cj ? v*u[I] : zero(v))
end
return val
end
@testset "utility operations" begin
# Check scalar problems
for (num_elements_per_dim, geo, ip) ∈ [
# (4,Triangle, Lagrange{2,RefTetrahedron,1}()),
(2,Triangle, Lagrange{2,RefTetrahedron,2}()),
(2,Triangle, Lagrange{2,RefTetrahedron,3}()),
# (5,Tetrahedron, Lagrange{3,RefTetrahedron,1}()),
(2,Tetrahedron, Lagrange{3,RefTetrahedron,2}()),
# (4,Quadrilateral, Lagrange{2,RefCube,1}()),
(2,Quadrilateral, Lagrange{2,RefCube,2}()),
# (4,Hexahedron, Lagrange{3,RefCube,1}()),
(2,Hexahedron, Lagrange{3,RefCube,2}())
]
@testset "scalar($num_elements_per_dim, $geo, $ip)" begin
# Get solution
dim = Ferrite.getdim(ip)
grid = generate_grid(geo, ntuple(x->num_elements_per_dim, dim));
dh = DofHandler(grid)
add!(dh, :u, ip)
close!(dh);
u = Vector{Float64}(undef, ndofs(dh))
f_ana(x::Union{Vec{2},FerriteViz.GeometryBasics.Point{2}}) = 0.5x[1]^2 - 2x[2]^2 + x[1]*x[2]
f_ana(x::Union{Vec{3},FerriteViz.GeometryBasics.Point{3}}) = -x[1]^2 + 0.3*x[2]^2 + 2*x[3]^2 + 5x[1]*x[2] - 2x[1]*x[3] + 0.1x[3]*x[2]
Ferrite.apply_analytical!(u, dh, :u, f_ana)
@testset "solution fields" begin
plotter = FerriteViz.MakiePlotter(dh,u)
data = FerriteViz.transfer_solution(plotter,u,process=x->x)[:,1]
visible_nodes = .!isnan.(data)# TODO add API
@test all(isapprox.(data[visible_nodes], f_ana.(plotter.physical_coords[visible_nodes]); atol=_test_tolerance(ip)))
end
# Compute gradient/flux field
@testset "gradient fields" begin
(dh_grad, u_grad) = FerriteViz.interpolate_gradient_field(dh, u, :u)
# Check gradient of solution
@testset "interpolate_gradient_field" begin
qr = QuadratureRule{dim,Ferrite.getrefshape(ip)}(2) # TODO sample random point
ip_geo = Ferrite.default_interpolation(geo)
ip_grad = Ferrite.getfieldinterpolation(dh_grad, Ferrite.find_field(dh_grad, :gradient))
cellvalues_grad = Ferrite.CellVectorValues(qr, ip_grad, ip_geo)
for cell in CellIterator(dh_grad)
reinit!(cellvalues_grad, cell)
coords = getcoordinates(cell)
uₑ = @views u_grad[celldofs(cell)]
for q_point in 1:getnquadpoints(cellvalues_grad)
x = spatial_coordinate(cellvalues_grad, q_point, coords)
uₐₚₚᵣₒₓ = function_value(cellvalues_grad, q_point, uₑ)
uₐₙₐ = Tensors.gradient(f_ana, x)
@test all(isapprox.(uₐₙₐ, uₐₚₚᵣₒₓ;atol=_test_tolerance(ip)))
end
end
end
# Check for correct transfer
@testset "transfer_solution" begin
plotter_grad = FerriteViz.MakiePlotter(dh_grad,u_grad)
data_grad = FerriteViz.transfer_solution(plotter_grad,u_grad; field_name=:gradient, process=x->x)
visible_nodes_grad = .!isnan.(data_grad)
for i ∈ 1:size(data_grad, 1)
!visible_nodes_grad[i] && continue
x = Vec{dim,Float64}(plotter_grad.physical_coords[i])
∇uₐₚₚᵣₒₓ = Vec{dim,Float64}(data_grad[i,:])
∇uₐₙₐ = Tensors.gradient(f_ana, x)
@test all(isapprox.(∇uₐₚₚᵣₒₓ, ∇uₐₙₐ; atol=_test_tolerance(ip)))
end
end
end
end
@testset "vector($num_elements_per_dim, $geo, $ip)" begin
# Get solution
dim = Ferrite.getdim(ip)
grid = generate_grid(geo, ntuple(x->num_elements_per_dim, dim));
dh = DofHandler(grid)
add!(dh, :u, dim, ip)
close!(dh);
# Some test functions with rather complicated gradients
f_ana(x::Union{Vec{3},FerriteViz.GeometryBasics.Point{3}}) = Vec{3}((
-x[1]^2 + 0.3*x[2]^2 + 2*x[3]^2 + 5x[1]*x[2] - 2x[1]*x[3] + 0.1x[3]*x[2],
x[1]^2 - 0.3*x[2]^2 + 1*x[3]^2 - 5x[1]*x[2] + 2x[1]*x[3] ,
1.3*x[2]^2 - 2*x[3]^2 + 5x[1]*x[2] - 0.7x[1]*x[3] - 0.1x[3]*x[2],
))
f_ana(x::Union{Vec{2},FerriteViz.GeometryBasics.Point{2}}) = Vec{2}((
-x[1]^2 + 0.3*x[2]^2 + 5x[1]*x[2],
x[1]^2 + 2.3*x[2]^2 - 0.1x[1]*x[2],
))
u = Vector{Float64}(undef, ndofs(dh))
Ferrite.apply_analytical!(u, dh, :u, f_ana)
@testset "solution fields" begin
plotter = FerriteViz.MakiePlotter(dh,u)
data = FerriteViz.transfer_solution(plotter,u,process=x->x)
visible_nodes = .!isnan.(data)# TODO add API
for i ∈ 1:size(data, 1)
!visible_nodes[i] && continue
uₐₚₚᵣₒₓ = Vec{dim}(data[i,:])
uₐₙₐ = f_ana(Vec{dim}(plotter.physical_coords[i]))
@test all(isapprox.(uₐₚₚᵣₒₓ, uₐₙₐ; atol=_test_tolerance(ip)))
end
end
# Compute gradient/flux field
@testset "gradient fields" begin
(dh_grad, u_grad) = FerriteViz.interpolate_gradient_field(dh, u, :u)
# Check gradient of solution
@testset "interpolate_gradient_field" begin
qr = QuadratureRule{dim,Ferrite.getrefshape(ip)}(2) # TODO sample random point
ip_geo = Ferrite.default_interpolation(geo)
ip_grad = Ferrite.getfieldinterpolation(dh_grad, Ferrite.find_field(dh_grad, :gradient))
cellvalues_grad = Ferrite.CellScalarValues(qr, ip_grad, ip_geo)
for cell in CellIterator(dh_grad)
reinit!(cellvalues_grad, cell)
coords = getcoordinates(cell)
uₑ = @views u_grad[celldofs(cell)]
for q_point in 1:getnquadpoints(cellvalues_grad)
x = spatial_coordinate(cellvalues_grad, q_point, coords)
∇uₐₚₚᵣₒₓ = function_value(MatrixValued(), cellvalues_grad, q_point, uₑ)
∇uₐₙₐ = Tensors.gradient(f_ana, x)
@test all(isapprox.(∇uₐₙₐ, ∇uₐₚₚᵣₒₓ;atol=_test_tolerance(ip)))
end
end
end
# Check for correct transfer
@testset "transfer_solution" begin
plotter_grad = FerriteViz.MakiePlotter(dh_grad,u_grad)
data_grad = FerriteViz.transfer_solution(plotter_grad,u_grad; field_name=:gradient, process=x->x)
visible_nodes_grad = .!isnan.(data_grad)
for i ∈ 1:size(data_grad, 1)
!visible_nodes_grad[i] && continue
x = Vec{dim,Float64}(plotter_grad.physical_coords[i])
# Transpose because constructed from Vector and not from Tuple :)
∇uₐₚₚᵣₒₓ = transpose(Tensor{2,dim,Float64,2*dim}(data_grad[i,:]))
∇uₐₙₐ = Tensors.gradient(f_ana, x)
@test all(isapprox.(∇uₐₙₐ, ∇uₐₚₚᵣₒₓ; atol=_test_tolerance(ip)))
end
end
end
end
end
end
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | docs | 5567 | # FerriteViz.jl changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.2.2] - 2023-11-10
### Added
- uniform refinement for high-order solutions ([#97][github-97])
- dependabot for GitHub actions ([#101][github-101])
- attempt to increase internal machinery test coverage ([#104][github-104])
### Modified
- `README.md` improvements with example gifs ([#96][github-96])
- CI trigger only for PRs and master ([#105][github-105])
- update docs to Documenter v1 ([#106][github-106])
- update Makie in docs to v0.19.12 ([#109][github-109])
### Fixed
- 0 `ntriangles` for empty domains ([#92][github-92])
- correct link for plasticity example ([#93][github-93])
- colorbar for 0 values and `ferriteviewer` deformation default changed to false ([#95][github-95])
## [0.2.1] - 2023-05-24
### Added
- Basic culling where all faces of all boundary elements are rendered ([#56][github-56]).
- Citation file ([#65][github-65])
- Support for MixedDofHandler ([#70][github-70])
### Modified
- Removed unnecessary extra dispatches for three-dimensional case ([#56][github-56]).
- function barrier for `transfer_solution` such that it's closer to type groundedness ([#68][github-68]).
- `MakiePlotter` holds now `ShaderAbstractions.Buffer`s ([#69][github-69])
- triangles are now stored in `Buffer`s with Observables
- triangle coords are now `Buffers`s with Observables
- replace overcomplicated ternary operators by `begin end` expressions ([#69][github-69])
- remove unused functions ([#69][github-69])
- default linear rendering of high order triangles ([#83][github-83])
- keyword argument `copy_fields` added to `interpolate_gradient_field` ([#83][github-83])
### Fixed
- Renamed `Crincle` to `Crinkle` ([#56][github-56]).
- wireframe plot could not selectively disable the plotting of the nodes ([#83][github-83])
- let CI error if example block errors ([#71][github-71])
- removed bug in `transfer_solution` from ([#70][github-70]) in ([#89][github-89])
- fix JSServe documentation issue ([#85][github-85])
## [0.2.0] - 2023-03-06
### Added
- Functionality to obtain a first-order refined mesh and the corresponding
dof handler and solution to approximately visualize high order solutions ([#57][github-57]).
- Subtitles for the tutorial to find useful stuff faster ([#57][github-57]).
- Crincle clip in 3D ([#56][github-56]), which basically removes from the visualization all elements
  above some surface that can be described by a function.
- `ClipPlane` to describe planar surfaces in 3D ([#56][github-56]).
- Docs and helper for gradient field visualization based on interpolation ([#51][github-51]).
Currently only useful in 2D, because we have no clip plane feature to introspect the interior
of 3D problems.
- Manufactured heat problem to test correctness of gradient field computation and as a
helper to generate scalar-valued solutions with different ansatz ([#51][github-51]).
### Modified
- Incompressible elasticity solver now takes the Ansatz functions and the actual material
  parameters instead of the Poisson number and the Ansatz functions ([#51][github-51]).
### Fixed
- Visualization of non-conforming solution fields in 3D ([#59][github-59]).
- Fixed a bug which computed the colorbar `(min,max)` wrong. Now the `max` is
  set to be `1.01` of `min`, guaranteeing that the value is larger than `min` if close to zero ([#51][github-51]).
- Update Makie dependencies to fix some visualization bugs ([#51][github-51]).
[github-51]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/51
[github-56]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/56
[github-57]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/57
[github-59]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/59
[github-65]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/65
[github-63]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/63
[github-68]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/68
[github-69]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/69
[github-70]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/70
[github-71]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/71
[github-83]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/83
[github-85]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/85
[github-89]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/89
[github-92]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/92
[github-93]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/93
[github-95]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/95
[github-96]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/96
[github-97]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/97
[github-101]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/101
[github-104]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/104
[github-105]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/105
[github-106]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/106
[github-109]: https://github.com/Ferrite-FEM/FerriteViz.jl/pull/109
[Unreleased]: https://github.com/Ferrite-FEM/FerriteViz.jl/compare/v0.2.2...HEAD
[0.2.2]: https://github.com/Ferrite-FEM/FerriteViz.jl/compare/v0.2.1...v0.2.2
[0.2.1]: https://github.com/Ferrite-FEM/FerriteViz.jl/compare/v0.2.0...v0.2.1
[0.2.0]: https://github.com/Ferrite-FEM/FerriteViz.jl/compare/v0.1.4...v0.2.0
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | docs | 2952 | # FerriteViz.jl
[](https://github.com/ferrite-fem/FerriteViz.jl/actions)
[![][docs-dev-img]][docs-dev-url]
[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-dev-url]: http://ferrite-fem.github.io/FerriteViz.jl/dev/
Small package to visualize your [Ferrite.jl](https://github.com/Ferrite-FEM/Ferrite.jl) results. Currently supports only Makie,
but the goal is to extend it for different plotting packages.
The package is highly experimental, and breaking changes to the internal machinery are to be expected.
Likely, only a small fraction of the interface will change over time.
## Installation
```julia
pkg> add FerriteViz
```
## Usage
Simply grab your solution vector and the corresponding dof handler to create a plotter:
```julia
import FerriteViz, GLMakie
dh, u = solve_problem()
plotter = MakiePlotter(dh, u)
FerriteViz.solutionplot(plotter)
```
For a guide check out [the tutorial section](https://ferrite-fem.github.io/FerriteViz.jl/dev/tutorial.html) - or just enjoy the gallery below!
## Features
- `solutionplot` FE solution contour plot on arbitrary finite element mesh (in Makie called `mesh` plots)
- `ferriteviewer` viewer with toggles and menus that update the plot
- `wireframe` plots the finite element mesh and optionally labels nodes and cells
- `arrows` - also called `quiver` plots, in paraview `glyph` filter
- `surface` 2D solutions in 3D space as surface, in paraview `warp by scalar` filter
- synchronous plotting while your simulation runs with any of the above listed options
- mutating versions of the above listed functions (except for the viewer)
- deformed plots available for `solutionplot` and `wireframe`
- full integration into the Makie ecosystem, e.g. themes, layouts etc.
- GPU powered plotting with GLMakie.jl, jupyter/pluto notebook plotting with WGLMakie.jl and vector graphics with CairoMakie.jl
## Missing Features
- correct visualization of nonlinear geometry faces/edges
- visualization of boundary conditions
- subdomain entity plotting, e.g. facesets, edgesets and so on
- ...
For a detailed list of planned features take a look into the [issue tracker](https://github.com/Ferrite-FEM/FerriteViz.jl/issues?q=is%3Aopen+is%3Aissue+label%3Aenhancement).
Helping hands are always welcome.
Just join the discussion in the corresponding issues.
## Gallery
Pulling the Ferrite.jl logo with a [cohesive zone material model](https://github.com/kimauth/FerriteCohesiveZones.jl).


Credits to [Kim Auth](https://github.com/kimauth/)
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | docs | 611 | # API Reference
On this page the docs of the provided functions are listed
```@docs
FerriteViz.MakiePlotter
FerriteViz.solutionplot
FerriteViz.solutionplot!
FerriteViz.cellplot
FerriteViz.cellplot!
FerriteViz.wireframe
FerriteViz.wireframe!
FerriteViz.arrows
FerriteViz.arrows!
FerriteViz.surface
FerriteViz.surface!
FerriteViz.elementinfo
FerriteViz.elementinfo!
FerriteViz.ferriteviewer
FerriteViz.update!
FerriteViz.for_discretization
FerriteViz.for_interpolation
FerriteViz.interpolate_gradient_field
FerriteViz.uniform_refinement
FerriteViz.crinkle_clip!
FerriteViz.crinkle_clip
FerriteViz.ClipPlane
```
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | docs | 9679 | # Advanced Topics
```@example 1
import JSServe # hide
JSServe.Page() # hide
```
## Gradient field visualization
FerriteViz also makes it easy to visualize gradient fields, like for example strain or stress fields.
A common approach to visualize stresses and strains is to compute the L2 projection onto a H1 field and plot this.
However, a big downside is that we lose the ability to investigate the jumps between elements, as they get smoothed out, hiding possible issues in the solution.
Therefore, we provide the ability to interpolate the gradient into a piecewise discontinuous field via `FerriteViz.interpolate_gradient_field`.
This function may be moved to Ferrite in the future.
In this quick example we show how to visualize strains and stresses side-by-side
```@example 1
using Ferrite
import FerriteViz
using FerriteViz: ε
import WGLMakie #activating the backend, switch to GLMakie or CairoMakie (for 2D) locally
include("ferrite-examples/incompressible-elasticity.jl") #defines dh_linear, dh_quadratic, u_linear, u_quadratic and mp
(dh_linear_grad, u_linear_grad) = FerriteViz.interpolate_gradient_field(dh_linear, u_linear, :u)
(dh_quadratic_grad, u_quadratic_grad) = FerriteViz.interpolate_gradient_field(dh_quadratic, u_quadratic, :u)
plotter_linear = FerriteViz.MakiePlotter(dh_linear_grad, u_linear_grad)
plotter_quadratic = FerriteViz.MakiePlotter(dh_quadratic_grad, u_quadratic_grad)
σ(∇u) = 2*mp.G*dev(ε(∇u)) + mp.K*tr(ε(∇u))*ones(ε(∇u)) #helper function to map gradient to stress
cmap = :jet
f = WGLMakie.Figure()
axs = [WGLMakie.Axis(f[1, 1], title="Strain norm (linear)"),WGLMakie.Axis(f[1, 2], title="Stress norm (linear)"),WGLMakie.Axis(f[1, 3], title="Pressure (deformed, linear)"),
WGLMakie.Axis(f[3, 1], title="Strain norm (quadratic)"),WGLMakie.Axis(f[3, 2], title="Stress norm (quadratic)"),WGLMakie.Axis(f[3, 3], title="Pressure (deformed, quadratic)")]
p1 = FerriteViz.solutionplot!(axs[1], plotter_linear, process=∇u->norm(ε(∇u)), colormap=cmap, field=:gradient)
p2 = FerriteViz.solutionplot!(axs[2], plotter_linear, process=∇u->norm(σ(∇u)), colormap=cmap, field=:gradient)
p3 = FerriteViz.solutionplot!(axs[3], dh_linear, u_linear, field=:p, deformation_field=:u, colormap=cmap)
f[2,1] = WGLMakie.Colorbar(f[1,1], p1, vertical=false)
f[2,2] = WGLMakie.Colorbar(f[1,2], p2, vertical=false)
f[2,3] = WGLMakie.Colorbar(f[1,3], p3, vertical=false)
p4 = FerriteViz.solutionplot!(axs[4], plotter_quadratic, process=∇u->norm(ε(∇u)), colormap=cmap, field=:gradient)
p5 = FerriteViz.solutionplot!(axs[5], plotter_quadratic, process=∇u->norm(σ(∇u)), colormap=cmap, field=:gradient)
p6 = FerriteViz.solutionplot!(axs[6], dh_quadratic, u_quadratic, field=:p, deformation_field=:u, colormap=cmap)
f[4,1] = WGLMakie.Colorbar(f[3,1], p1, vertical=false)
f[4,2] = WGLMakie.Colorbar(f[3,2], p2, vertical=false)
f[4,3] = WGLMakie.Colorbar(f[3,3], p3, vertical=false)
f
```
An alternative to this approach is to compute gradient quantities at sample points and plot these via `arrows`.
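As a rough sketch of this alternative (all calls below are the regular Ferrite API already used in this tutorial; the single-point quadrature rule, the linear interpolation, and the scalar field on some `dh`/`u` pair are assumptions for illustration), one could evaluate the gradient at one sample point per cell and hand the result to Makie's `arrows`:

```julia
# Sketch: evaluate ∇u at one sample point per cell and quiver-plot it.
# Assumes a scalar field on a 2D triangular mesh with dof handler dh
# and solution vector u.
qr = QuadratureRule{2,RefTetrahedron}(1)                  # one point per cell
cv = CellScalarValues(qr, Lagrange{2,RefTetrahedron,1}())
xs = Vec{2,Float64}[]
gs = Vec{2,Float64}[]
for cell in CellIterator(dh)
    reinit!(cv, cell)
    uₑ = u[celldofs(cell)]
    push!(xs, spatial_coordinate(cv, 1, getcoordinates(cell)))
    push!(gs, function_gradient(cv, 1, uₑ))
end
WGLMakie.arrows([x[1] for x in xs], [x[2] for x in xs],
                [g[1] for g in gs], [g[2] for g in gs])
```

Unlike the interpolated gradient field above, this keeps the raw per-cell samples, at the cost of not being a field you can contour-plot.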
## High-order fields
The investigation of high-order fields is currently only supported via a first-order refinement of the problem.
Here, the high-order approximation is replaced by a first order approximation of the field, which is
spanned by the nodes of the high-order approximation. For example, the first order refinement of a
heat problem on a square domain for Lagrange polynomials of order 5 looks like this:
```@example 1
include("ferrite-examples/heat-equation.jl"); #defines manufactured_heat_problem
f = WGLMakie.Figure()
axs = [WGLMakie.Axis3(f[1, 1], title="Coarse"), WGLMakie.Axis3(f[1, 2], title="Fine")]
dh,u = manufactured_heat_problem(Triangle, Lagrange{2,RefTetrahedron,5}(), 1)
dh_for,u_for = FerriteViz.for_discretization(dh, u)
plotter_for = FerriteViz.MakiePlotter(dh_for, u_for)
FerriteViz.surface!(axs[1], plotter_for)
dh,u = manufactured_heat_problem(Triangle, Lagrange{2,RefTetrahedron,5}(), 3)
dh_for,u_for = FerriteViz.for_discretization(dh, u)
plotter_for = FerriteViz.MakiePlotter(dh_for, u_for)
FerriteViz.surface!(axs[2], plotter_for)
f
```
Note that this method produces small artifacts due to the flattening of the nonlinearities of the high order ansatz.
However, it is still sufficient to investigate important features of the solution.
If users want higher resolution than the crude estimate given by the first-order refinement (as well as enough RAM), then we also provide a uniform tessellation algorithm which can be used instead
```@example 1
include("ferrite-examples/heat-equation.jl"); #defines manufactured_heat_problem
f = WGLMakie.Figure()
axs = [WGLMakie.Axis3(f[1, 1], title="Coarse"), WGLMakie.Axis3(f[1, 2], title="Fine")]
dh, u = manufactured_heat_problem(Hexahedron, Lagrange{3,RefCube,2}(), 2);
plotter = FerriteViz.MakiePlotter(dh,u);
clip_plane = FerriteViz.ClipPlane(Ferrite.Vec((0.0,0.5,0.5)), 0.1);
clipped_plotter = FerriteViz.crinkle_clip(plotter, clip_plane);
FerriteViz.solutionplot!(axs[1], clipped_plotter)
fine_clipped_plotter = FerriteViz.uniform_refinement(clipped_plotter, 4);
FerriteViz.solutionplot!(axs[2], fine_clipped_plotter)
f
```
In the future we will also provide an adaptive tessellation algorithm to resolve the high-order fields with full detail.
## Live plotting
Plotting while a computationally heavy simulation is performed can be easily achieved with FerriteViz.jl.
Every plotter object of type `MakiePlotter` holds a property called `u` which is a so called `Observable`.
If an `Observable` changes, all its dependencies are triggered to change as well. So, all we need to do is to update
the observable `plotter.u`.
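To see the mechanism in isolation, here is a minimal sketch using only Observables.jl (the package underlying Makie's `Observable`); the variable names are made up for illustration:

```julia
using Observables

source  = Observable(1.0)          # plays the role of plotter.u
doubled = map(x -> 2x, source)     # a dependency, like an open plot
source[] = 3.0                     # updating the source ...
doubled[]                          # ... propagates: now 6.0
```

Updating `plotter.u` triggers its dependencies in exactly this fashion, so every open plot built on it re-renders.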
For this purpose the function [`FerriteViz.update!`](@ref) is provided. It takes a `plotter::MakiePlotter`
and a new solution vector `u_new` and updates `plotter.u`, thereby updating all open plots called with `plotter`.
A summary of the needed steps for live plotting:
1. Create a plotter before your time stepping begins
2. Call a plot or the `ferriteviewer` and save the return value in a variable, e.g. `fig`
3. `display(fig)` in order to force the plot/viewer to pop up, even if it's called inside a function body
4. `FerriteViz.update!(plotter,u_new)` where `u_new` corresponds to your new solution of the time step
As an illustrative example, let's consider a slightly modified [plasticity example of Ferrite.jl](https://github.com/Ferrite-FEM/FerriteViz.jl/blob/master/docs/src/ferrite-examples/plasticity-live.jl).
For the full source code, please refer to the link. In the following code we only highlight the necessary changes.
```julia
function solve(liveplotting=false)
# set up your problem
# lots of code
dh = create_dofhandler(grid, interpolation) #helper function from script file
n_dofs = ndofs(dh) # total number of dofs
u = zeros(n_dofs)
if liveplotting
####### Here we take care of the conceptual steps 1, 2 and 3 #######
plotter = MakiePlotter(dh,u)
fig = ferriteviewer(plotter)
display(fig)
####################################################################
end
Δu = zeros(n_dofs) # displacement correction
r = zeros(n_dofs) # residual
K = create_sparsity_pattern(dh); # tangent stiffness matrix
nqp = getnquadpoints(cellvalues)
states = [[MaterialState() for _ in 1:nqp] for _ in 1:getncells(grid)]
# Newton-Raphson loop
NEWTON_TOL = 1 # 1 N
for timestep in 1:n_timesteps
while true; newton_itr += 1
if newton_itr > 8
error("Reached maximum Newton iterations, aborting")
break
end
K, r = doassemble(cellvalues, facevalues, K, grid, dh, material, u,
states, traction);
norm_r = norm(r[Ferrite.free_dofs(dbcs)])
if norm_r < NEWTON_TOL
break
end
apply_zero!(K, r, dbcs)
Δu = Symmetric(K) \ r
u -= Δu
end
if liveplotting
####### Step 4 updating the current solution vector in plotter #######
FerriteViz.update!(plotter,u)
######################################################################
sleep(0.1)
end
# Update all the material states after we have reached equilibrium
for cell_states in states
foreach(update_state!, cell_states)
end
u_max[timestep] = max(abs.(u)...) # maximum displacement in current timestep
end
# postprocessing
# lots of code
return u, dh, traction_magnitude
end
u, dh, traction_magnitude = solve();
```
Note that we create the `plotter::MakiePlotter` object before the time stepping begins and call `ferriteviewer` on the `plotter`.
The next function call is crucial to get the live plotting working. `display(fig)` forces the viewer to pop up, even if it's inside a function body.
Now, the only missing piece is the `FerriteViz.update!` of the plotter, which happens directly after the Newton iteration. The result for this code looks like this:

Since the computational load of one time step is too low in this example, the plotter would just update all the time and likely never display anything, so we artificially increase the load of one time step by
`sleep`ing for 0.1s.
If you don't need the full viewer as a live plot, you can of course instead call `solutionplot` (or any other plot combination) with appropriate keyword arguments to only have a specific live plot.
This can be beneficial performance-wise.
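Such a minimal live plot could look like the following skeleton, following the four-step summary above (the solver loop is hypothetical; `advance!` stands in for your own time-stepping code):

```julia
# Skeleton of a live solutionplot instead of the full viewer.
plotter = FerriteViz.MakiePlotter(dh, u)                    # step 1
fig = FerriteViz.solutionplot(plotter, colormap=:thermal)   # step 2
display(fig)                                                # step 3
for timestep in 1:n_timesteps
    u_new = advance!(dh, u, timestep)  # placeholder for your solver update
    FerriteViz.update!(plotter, u_new)                      # step 4
end
```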
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | docs | 1010 | # Developer Documentation
Note that these functions could be removed or change in behavior between minor version changes! Use and dispatch on these with care!
```@docs
FerriteViz.num_vertices
FerriteViz.vertices
FerriteViz.transfer_quadrature_face_to_cell
FerriteViz.decompose!(coord_offset, coord_matrix, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Union{FerriteViz.Ferrite.AbstractCell{2,N,3}, FerriteViz.Ferrite.AbstractCell{3,3,1}}) where {N}
FerriteViz.decompose!(coord_offset, coord_matrix::Vector{Point{space_dim,T}}, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::Union{FerriteViz.Ferrite.AbstractCell{2,N,4}, FerriteViz.Ferrite.AbstractCell{3,4,1}}) where {N,space_dim,T}
FerriteViz.decompose!(coord_offset, coord_matrix, ref_coord_matrix, triangle_offset, triangle_matrix, grid, cell::FerriteViz.Ferrite.AbstractCell{3,N,M}) where {N,M}
FerriteViz.transfer_solution
FerriteViz.postprocess
FerriteViz._tensorsjl_gradient_accessor
FerriteViz.linear_face_cell
```
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | docs | 2462 | # FerriteViz.jl
FerriteViz.jl is a small package to visualize your Ferrite.jl results. Currently all Makie backends are supported; thus,
you can visualize your results in a GLMakie window, inside Pluto/Jupyter notebooks via WGLMakie, and produce nice vector graphics with
CairoMakie.
In the future this package aims to support other plotting packages as well, such as Plots.jl and PGFPlotsX.jl. Contributions are highly welcome.
## Getting Started
Install FerriteViz.jl with the in-built package manager of Julia
```julia
pkg> add FerriteViz
```
Do your computation with Ferrite.jl and save the used `DofHandler` and solution vector into a variable. Pass those two variables into
the `MakiePlotter` constructor
```julia
plotter = MakiePlotter(dh,u)
```
Now, you can use `solutionplot`, `wireframe`, `arrows`, `surface` or the viewer via `ferriteviewer`.
Note that the mutating `solutionplot!`, `wireframe!`, `arrows!` and `surface!` are available as well.
## Unique features
This package offers a set of unique features that are not easily reproducible with other export options of Ferrite.jl:
- [`FerriteViz.solutionplot`](@ref) FE solution contour plot on arbitrary finite element mesh (in Makie called `mesh` plots)
- [`FerriteViz.ferriteviewer`](@ref) viewer with toggles and menus that update the plot
- [`FerriteViz.wireframe`](@ref) plots the finite element mesh and optionally labels nodes and cells
- [`FerriteViz.arrows`](@ref) - also called `quiver` plots, in paraview `glyph` filter
- [`FerriteViz.surface`](@ref) 2D solutions in 3D space as surface, in paraview `warp by scalar` filter
- synchronous plotting while your simulation runs with any of the above listed options
- mutating versions of the above listed functions (except for the viewer)
- deformed plots available for `solutionplot` and `wireframe` with linear geometry
- full integration into the Makie ecosystem, e.g. themes, layouts etc.
- GPU powered plotting with GLMakie.jl, jupyter/pluto notebook plotting with WGLMakie.jl and vector graphics with CairoMakie.jl
- visualization of high order solutions via first order refinement
- visualization of non-conforming solutions, e.g. for Crouzeix-Raviart ansatz
## Viewing the docs locally
To view the docs locally use the provided live server:
```julia
include("docs/liveserver.jl")
```
Opening the html files in the browser directly might fail with a CORS error, manifesting itself in figures which don't render correctly.
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"MIT"
] | 0.2.2 | 4277f6a13ef32d1751e1354843bde862dc51a508 | docs | 4655 | # Tutorial
## Solve a Boundary Value Problem
Start with solving a boundary value problem as you would usually do with Ferrite. It is crucial that you save the DofHandler
and solution vector you used, because we need to pass those objects to `MakiePlotter`.
## Basics
!!! tip "Plotting Functions"
Currently, [`FerriteViz.solutionplot`](@ref), [`FerriteViz.wireframe`](@ref), [`FerriteViz.surface`](@ref), [`FerriteViz.arrows`](@ref) and their mutating analogues with `!` are defined for `MakiePlotter`.
Due to the nature of the documentation we need `WGLMakie`, however, you can simply exchange any `WGLMakie` call by `GLMakie`.
### Mesh utilities
```@example 1
import JSServe # hide
JSServe.Page() # hide
```
You can start by plotting your mesh
```@example 1
import FerriteViz
using Ferrite
import WGLMakie #activating the backend, switch to GLMakie or CairoMakie (for 2D) locally
WGLMakie.set_theme!(resolution=(800, 400)) # hide
grid = generate_grid(Hexahedron,(3,3,3))
FerriteViz.wireframe(grid,markersize=10,strokewidth=2)
```
FerriteViz.jl also supports showing labels for `Ferrite.AbstractGrid` entities, such as node- and celllabels, as well as plotting cellsets.
```@example 1
grid = generate_grid(Quadrilateral,(3,3))
addcellset!(grid,"s1",Set((1,4,7)))
addcellset!(grid,"s2",Set((2,5,8)))
addcellset!(grid,"s3",Set((3,6,9)))
FerriteViz.wireframe(grid,markersize=10,strokewidth=1,nodelabels=true,celllabels=true,cellsets=true)
```
### Solution field of a boundary value problem
If you solve some boundary value problem with Ferrite.jl, keep in mind to save your `dh::DofHandler` and solution vector `u::Vector{T}` in some variable.
With them, we create the `MakiePlotter` struct that dispatches on the plotting functions.
```@example 1
include("ferrite-examples/incompressible-elasticity.jl") #defines variables dh_quadratic and u_quadratic
plotter = FerriteViz.MakiePlotter(dh_quadratic,u_quadratic)
FerriteViz.arrows(plotter)
```
Per default, all plotting functions grab the first field in the `DofHandler`, but of course you can plot a different field as well.
The next plot will show the pressure instead of the displacement
```@example 1
FerriteViz.solutionplot(plotter,field=:p)
```
For certain 2D problems it makes sense to visualize the result as a `surface` plot. To showcase the combination with the mutating versions of the plotting functions,
the `solutionplot` function is plotted below the `surface` plot
```@example 1
FerriteViz.surface(plotter)
FerriteViz.solutionplot!(plotter,colormap=:magma)
WGLMakie.current_figure()
```
### Deformed mesh for mechanical boundary value problem
However, in structural mechanics we often would like to see the deformed configuration,
which can be achieved by providing a `deformation_field::Symbol` as a keyword argument.
```@example 1
include("ferrite-examples/plasticity.jl") #only defines solving function
u, dh, uhistory, σ, κ = solve()
plotter = FerriteViz.MakiePlotter(dh,u)
FerriteViz.solutionplot(plotter,colormap=:thermal,deformation_field=:u)
FerriteViz.wireframe!(plotter,deformation_field=:u,markersize=10,strokewidth=1)
WGLMakie.current_figure()
```
### Showing per-cell data
FerriteViz.jl also supports to plot cell data, such as the **averaged** von-Mises stress or the drag stress of the plasticity example.
```@example 1
u, dh, uhistory, σ, κ = solve()
FerriteViz.cellplot(plotter,σ,colormap=:thermal,deformation_field=:u,deformation_scale=2.0)
FerriteViz.wireframe!(plotter,deformation_field=:u,markersize=10,strokewidth=1,deformation_scale=2.0)
WGLMakie.current_figure()
```
For a more granular investigation of the stress field consult the advanced tutorial.
### Interior of a 3D domain
For 3D problems we can also inspect the interior of the domain. Currently only crinkle clipping
is implemented; it can be used as follows:
```@example 1
clip_plane = FerriteViz.ClipPlane(Vec((0.0,0.5,0.5)), 0.7)
clipped_plotter = FerriteViz.crinkle_clip(plotter, clip_plane)
FerriteViz.solutionplot(clipped_plotter,deformation_field=:u,colormap=:thermal,deformation_scale=2.0)
WGLMakie.current_figure()
```
Note that we can replace the plane with some other object or a decision function. Such a function takes
the grid and a cell index as input and returns a boolean deciding whether the cell is visible.
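As a minimal sketch (assuming Ferrite's `getcells`/`getnodes` accessors and the `plotter` from the plasticity example above; the function name is hypothetical), such a decision function might look like:

```julia
using Ferrite
# Hypothetical decision function: a cell stays visible if its first node
# lies in the lower half of the domain (y ≤ 0.5).
function lower_half_visible(grid, cellid)
    cell = Ferrite.getcells(grid, cellid)
    node = Ferrite.getnodes(grid, first(cell.nodes))
    return node.x[2] ≤ 0.5
end

clipped = FerriteViz.crinkle_clip(plotter, lower_half_visible)
FerriteViz.solutionplot(clipped, deformation_field=:u, colormap=:thermal)
WGLMakie.current_figure()
```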
### What's next?
Further, this package provides an interactive viewer that you can call with `ferriteviewer(plotter)` for static views and
`ferriteviewer(plotter,u_history)` for time-dependent views, respectively.
If you want to live-plot your solution while solving a finite element system, consider taking a look at the advanced topics page.
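For example (a sketch, reusing `plotter` and the solution history `uhistory` from the plasticity example above):

```julia
# Interactive viewer for the current solution:
FerriteViz.ferriteviewer(plotter)

# Interactive viewer stepping through a time series of solution vectors:
FerriteViz.ferriteviewer(plotter, uhistory)
```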
| FerriteViz | https://github.com/Ferrite-FEM/FerriteViz.jl.git |
|
[
"Apache-2.0"
] | 0.2.0 | f02458b04bd907651063c380900ef2e1c9941acc | code | 109 | module OHDSICohortExpressions
include("datamodel.jl")
include("expression.jl")
include("translate.jl")
end
| OHDSICohortExpressions | https://github.com/MechanicalRabbit/OHDSICohortExpressions.jl.git |
|
[
"Apache-2.0"
] | 0.2.0 | f02458b04bd907651063c380900ef2e1c9941acc | code | 38536 |
using FunSQL: SQLTable
# Model collects the `SQLTable` definitions for the OMOP CDM, vocabulary, and
# result tables, parameterized by the CDM version and the schema names.
struct Model
attribute_definition::Union{SQLTable, Nothing}
care_site::Union{SQLTable, Nothing}
cdm_source::Union{SQLTable, Nothing}
cohort::Union{SQLTable, Nothing}
cohort_attribute::Union{SQLTable, Nothing}
cohort_definition::Union{SQLTable, Nothing}
concept::Union{SQLTable, Nothing}
concept_ancestor::Union{SQLTable, Nothing}
concept_class::Union{SQLTable, Nothing}
concept_relationship::Union{SQLTable, Nothing}
concept_synonym::Union{SQLTable, Nothing}
condition_era::Union{SQLTable, Nothing}
condition_occurrence::Union{SQLTable, Nothing}
cost::Union{SQLTable, Nothing}
death::Union{SQLTable, Nothing}
device_exposure::Union{SQLTable, Nothing}
domain::Union{SQLTable, Nothing}
dose_era::Union{SQLTable, Nothing}
drug_era::Union{SQLTable, Nothing}
drug_exposure::Union{SQLTable, Nothing}
drug_strength::Union{SQLTable, Nothing}
fact_relationship::Union{SQLTable, Nothing}
location::Union{SQLTable, Nothing}
measurement::Union{SQLTable, Nothing}
metadata::Union{SQLTable, Nothing}
note::Union{SQLTable, Nothing}
note_nlp::Union{SQLTable, Nothing}
observation::Union{SQLTable, Nothing}
observation_period::Union{SQLTable, Nothing}
payer_plan_period::Union{SQLTable, Nothing}
person::Union{SQLTable, Nothing}
procedure_occurrence::Union{SQLTable, Nothing}
provider::Union{SQLTable, Nothing}
relationship::Union{SQLTable, Nothing}
source_to_concept_map::Union{SQLTable, Nothing}
specimen::Union{SQLTable, Nothing}
visit_detail::Union{SQLTable, Nothing}
visit_occurrence::Union{SQLTable, Nothing}
vocabulary::Union{SQLTable, Nothing}
function Model(;
cdm_version = nothing,
cdm_schema = nothing,
vocabulary_schema = nothing,
results_schema = nothing,
target_schema = nothing,
target_table = nothing)
cdm_qualifiers = vocabulary_qualifiers = target_qualifiers = Symbol[]
if cdm_schema !== nothing
cdm_qualifiers = Symbol[cdm_schema]
end
if vocabulary_schema !== nothing
vocabulary_qualifiers = Symbol[vocabulary_schema]
end
if target_schema !== nothing
target_qualifiers = Symbol[target_schema]
end
cdm_version = something(cdm_version, v"5.3.1")
cdm_version = typeof(cdm_version) == VersionNumber ?
cdm_version : VersionNumber(cdm_version)
@assert v"5.2" <= cdm_version < v"5.4"
attribute_definition =
SQLTable(qualifiers = cdm_qualifiers,
name = :attribute_definition,
columns = [:attribute_definition_id,
:attribute_name,
:attribute_description,
:attribute_type_concept_id,
:attribute_syntax])
care_site =
SQLTable(qualifiers = cdm_qualifiers,
name = :care_site,
columns = [:care_site_id,
:care_site_name,
:place_of_service_concept_id,
:location_id,
:care_site_source_value,
:place_of_service_source_value])
cdm_source =
SQLTable(qualifiers = cdm_qualifiers,
name = :cdm_source,
columns = [:cdm_source_name,
:cdm_source_abbreviation,
:cdm_holder,
:source_description,
:source_documentation_reference,
:cdm_etl_reference,
:source_release_date,
:cdm_release_date,
:cdm_version,
:vocabulary_version])
cohort =
SQLTable(qualifiers = target_qualifiers,
name = something(target_table, :cohort),
columns = [:cohort_definition_id,
:subject_id,
:cohort_start_date,
:cohort_end_date])
cohort_attribute =
SQLTable(qualifiers = target_qualifiers,
name = :cohort_attribute,
columns = [:cohort_definition_id,
:subject_id,
:cohort_start_date,
:cohort_end_date,
:attribute_definition_id,
:value_as_number,
:value_as_concept_id])
cohort_definition =
SQLTable(qualifiers = target_qualifiers,
name = :cohort_definition,
columns = [:cohort_definition_id,
:cohort_definition_name,
:cohort_definition_description,
:definition_type_concept_id,
:cohort_definition_syntax,
:subject_concept_id,
:cohort_initiation_date])
concept =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :concept,
columns = [:concept_id,
:concept_name,
:domain_id,
:vocabulary_id,
:concept_class_id,
:standard_concept,
:concept_code,
:valid_start_date,
:valid_end_date,
:invalid_reason])
concept_ancestor =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :concept_ancestor,
columns = [:ancestor_concept_id,
:descendant_concept_id,
:min_levels_of_separation,
:max_levels_of_separation])
concept_class =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :concept_class,
columns = [:concept_class_id,
:concept_class_name,
:concept_class_concept_id])
concept_relationship =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :concept_relationship,
columns = [:concept_id_1,
:concept_id_2,
:relationship_id,
:valid_start_date,
:valid_end_date,
:invalid_reason])
concept_synonym =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :concept_synonym,
columns = [:concept_id,
:concept_synonym_name,
:language_concept_id])
condition_era =
SQLTable(qualifiers = cdm_qualifiers,
name = :condition_era,
columns = [:condition_era_id,
:person_id,
:condition_concept_id,
:condition_era_start_date,
:condition_era_end_date,
:condition_occurrence_count])
condition_occurrence =
if cdm_version < v"5.3"
SQLTable(qualifiers = cdm_qualifiers,
name = :condition_occurrence,
columns = [:condition_occurrence_id,
:person_id,
:condition_concept_id,
:condition_start_date,
:condition_start_datetime,
:condition_end_date,
:condition_end_datetime,
:condition_type_concept_id,
:stop_reason,
:provider_id,
:visit_occurrence_id,
:condition_source_value,
:condition_source_concept_id,
:condition_status_source_value,
:condition_status_concept_id])
else
SQLTable(qualifiers = cdm_qualifiers,
name = :condition_occurrence,
columns = [:condition_occurrence_id,
:person_id,
:condition_concept_id,
:condition_start_date,
:condition_start_datetime,
:condition_end_date,
:condition_end_datetime,
:condition_type_concept_id,
:stop_reason,
:provider_id,
:visit_occurrence_id,
:visit_detail_id,
:condition_source_value,
:condition_source_concept_id,
:condition_status_source_value,
:condition_status_concept_id])
end
cost =
SQLTable(qualifiers = cdm_qualifiers,
name = :cost,
columns = [:cost_id,
:cost_event_id,
:cost_domain_id,
:cost_type_concept_id,
:currency_concept_id,
:total_charge,
:total_cost,
:total_paid,
:paid_by_payer,
:paid_by_patient,
:paid_patient_copay,
:paid_patient_coinsurance,
:paid_patient_deductible,
:paid_by_primary,
:paid_ingredient_cost,
:paid_dispensing_fee,
:payer_plan_period_id,
:amount_allowed,
:revenue_code_concept_id,
:reveue_code_source_value,
:drg_concept_id,
:drg_source_value])
death =
SQLTable(qualifiers = cdm_qualifiers,
name = :death,
columns = [:person_id,
:death_date,
:death_datetime,
:death_type_concept_id,
:cause_concept_id,
:cause_source_value,
:cause_source_concept_id])
device_exposure =
if cdm_version < v"5.3"
SQLTable(qualifiers = cdm_qualifiers,
name = :device_exposure,
columns = [:device_exposure_id,
:person_id,
:device_concept_id,
:device_exposure_start_date,
:device_exposure_start_datetime,
:device_exposure_end_date,
:device_exposure_end_datetime,
:device_type_concept_id,
:unique_device_id,
:quantity,
:provider_id,
:visit_occurrence_id,
:device_source_value,
:device_source_concept_id])
else
SQLTable(qualifiers = cdm_qualifiers,
name = :device_exposure,
columns = [:device_exposure_id,
:person_id,
:device_concept_id,
:device_exposure_start_date,
:device_exposure_start_datetime,
:device_exposure_end_date,
:device_exposure_end_datetime,
:device_type_concept_id,
:unique_device_id,
:quantity,
:provider_id,
:visit_occurrence_id,
:visit_detail_id,
:device_source_value,
:device_source_concept_id])
end
domain =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :domain,
columns = [:domain_id,
:domain_name,
:domain_concept_id])
dose_era =
SQLTable(qualifiers = cdm_qualifiers,
name = :dose_era,
columns = [:dose_era_id,
:person_id,
:drug_concept_id,
:unit_concept_id,
:dose_value,
:dose_era_start_date,
:dose_era_end_date])
drug_era =
SQLTable(qualifiers = cdm_qualifiers,
name = :drug_era,
columns = [:drug_era_id,
:person_id,
:drug_concept_id,
:drug_era_start_date,
:drug_era_end_date,
:drug_exposure_count,
:gap_days])
drug_exposure =
if cdm_version < v"5.3"
SQLTable(qualifiers = cdm_qualifiers,
name = :drug_exposure,
columns = [:drug_exposure_id,
:person_id,
:drug_concept_id,
:drug_exposure_start_date,
:drug_exposure_start_datetime,
:drug_exposure_end_date,
:drug_exposure_end_datetime,
:verbatim_end_date,
:drug_type_concept_id,
:stop_reason,
:refills,
:quantity,
:days_supply,
:sig,
:route_concept_id,
:lot_number,
:provider_id,
:visit_occurrence_id,
:drug_source_value,
:drug_source_concept_id,
:route_source_value,
:dose_unit_source_value])
else
SQLTable(qualifiers = cdm_qualifiers,
name = :drug_exposure,
columns = [:drug_exposure_id,
:person_id,
:drug_concept_id,
:drug_exposure_start_date,
:drug_exposure_start_datetime,
:drug_exposure_end_date,
:drug_exposure_end_datetime,
:verbatim_end_date,
:drug_type_concept_id,
:stop_reason,
:refills,
:quantity,
:days_supply,
:sig,
:route_concept_id,
:lot_number,
:provider_id,
:visit_occurrence_id,
:visit_detail_id,
:drug_source_value,
:drug_source_concept_id,
:route_source_value,
:dose_unit_source_value])
end
drug_strength =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :drug_strength,
columns = [:drug_concept_id,
:ingredient_concept_id,
:amount_value,
:amount_unit_concept_id,
:numerator_value,
:numerator_unit_concept_id,
:denominator_value,
:denominator_unit_concept_id,
:box_size,
:valid_start_date,
:valid_end_date,
:invalid_reason])
fact_relationship =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :fact_relationship,
columns = [:domain_concept_id_1,
:fact_id_1,
:domain_concept_id_2,
:fact_id_2,
:relationship_concept_id])
location =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :location,
columns = [:location_id,
:address_1,
:address_2,
:city,
:state,
:zip,
:county,
:location_source_value])
measurement =
if cdm_version < v"5.3"
SQLTable(qualifiers = cdm_qualifiers,
name = :measurement,
columns = [:measurement_id,
:person_id,
:measurement_concept_id,
:measurement_date,
:measurement_datetime,
:measurement_type_concept_id,
:operator_concept_id,
:value_as_number,
:value_as_concept_id,
:unit_concept_id,
:range_low,
:range_high,
:provider_id,
:visit_occurrence_id,
:measurement_source_value,
:measurement_source_concept_id,
:unit_source_value,
:value_source_value])
else
SQLTable(qualifiers = cdm_qualifiers,
name = :measurement,
columns = [:measurement_id,
:person_id,
:measurement_concept_id,
:measurement_date,
:measurement_datetime,
:measurement_time,
:measurement_type_concept_id,
:operator_concept_id,
:value_as_number,
:value_as_concept_id,
:unit_concept_id,
:range_low,
:range_high,
:provider_id,
:visit_occurrence_id,
:visit_detail_id,
:measurement_source_value,
:measurement_source_concept_id,
:unit_source_value,
:value_source_value])
end
metadata =
if cdm_version < v"5.3"
nothing
else
SQLTable(qualifiers = cdm_qualifiers,
name = :metadata,
columns = [:metadata_concept_id,
:metadata_type_concept_id,
:name,
:value_as_string,
:value_as_concept_id,
:metadata_date,
:metadata_datetime])
end
note =
if cdm_version < v"5.3"
SQLTable(qualifiers = cdm_qualifiers,
name = :note,
columns = [:note_id,
:person_id,
:note_date,
:note_datetime,
:note_type_concept_id,
:note_class_concept_id,
:note_title,
:note_text,
:encoding_concept_id,
:language_concept_id,
:provider_id,
:visit_occurrence_id,
:note_source_value])
else
SQLTable(qualifiers = cdm_qualifiers,
name = :note,
columns = [:note_id,
:person_id,
:note_date,
:note_datetime,
:note_type_concept_id,
:note_class_concept_id,
:note_title,
:note_text,
:encoding_concept_id,
:language_concept_id,
:provider_id,
:visit_occurrence_id,
:visit_detail_id,
:note_source_value])
end
note_nlp =
SQLTable(qualifiers = cdm_qualifiers,
name = :note_nlp,
columns = [:note_nlp_id,
:note_id,
:section_concept_id,
:snippet,
:offset,
:lexical_variant,
:note_nlp_concept_id,
:note_nlp_source_concept_id,
:nlp_system,
:nlp_date,
:nlp_datetime,
:term_exists,
:term_temporal,
:term_modifiers])
observation =
if cdm_version < v"5.3"
SQLTable(qualifiers = cdm_qualifiers,
name = :observation,
columns = [:observation_id,
:person_id,
:observation_concept_id,
:observation_date,
:observation_datetime,
:observation_type_concept_id,
:value_as_number,
:value_as_string,
:value_as_concept_id,
:qualifier_concept_id,
:unit_concept_id,
:provider_id,
:visit_occurrence_id,
:observation_source_value,
:observation_source_concept_id,
:unit_source_value,
:qualifier_source_value])
else
SQLTable(qualifiers = cdm_qualifiers,
name = :observation,
columns = [:observation_id,
:person_id,
:observation_concept_id,
:observation_date,
:observation_datetime,
:observation_type_concept_id,
:value_as_number,
:value_as_string,
:value_as_concept_id,
:qualifier_concept_id,
:unit_concept_id,
:provider_id,
:visit_occurrence_id,
:visit_detail_id,
:observation_source_value,
:observation_source_concept_id,
:unit_source_value,
:qualifier_source_value])
end
observation_period =
SQLTable(qualifiers = cdm_qualifiers,
name = :observation_period,
columns = [:observation_period_id,
:person_id,
:observation_period_start_date,
:observation_period_end_date,
:period_type_concept_id])
payer_plan_period =
if cdm_version < v"5.3"
SQLTable(qualifiers = cdm_qualifiers,
name = :payer_plan_period,
columns = [:payer_plan_period_id,
:person_id,
:payer_plan_period_start_date,
:payer_plan_period_end_date,
:payer_source_value,
:plan_source_value,
:family_source_value])
else
SQLTable(qualifiers = cdm_qualifiers,
name = :payer_plan_period,
columns = [:payer_plan_period_id,
:person_id,
:payer_plan_period_start_date,
:payer_plan_period_end_date,
:payer_concept_id,
:payer_source_value,
:payer_source_concept_id,
:plan_concept_id,
:plan_source_value,
:plan_source_concept_id,
:sponsor_concept_id,
:sponsor_source_value,
:sponsor_source_concept_id,
:family_source_value,
:stop_reason_concept_id,
:stop_reason_source_value,
:stop_reason_source_concept_id])
end
person =
SQLTable(qualifiers = cdm_qualifiers,
name = :person,
columns = [:person_id,
:gender_concept_id,
:year_of_birth,
:month_of_birth,
:day_of_birth,
:birth_datetime,
:race_concept_id,
:ethnicity_concept_id,
:location_id,
:provider_id,
:care_site_id,
:person_source_value,
:gender_source_value,
:gender_source_concept_id,
:race_source_value,
:race_source_concept_id,
:ethnicity_source_value,
:ethnicity_source_concept_id])
procedure_occurrence =
if cdm_version < v"5.3"
                SQLTable(qualifiers = cdm_qualifiers,
                         name = :procedure_occurrence,
columns = [:procedure_occurrence_id,
:person_id,
:procedure_concept_id,
:procedure_date,
:procedure_datetime,
:procedure_type_concept_id,
:modifier_concept_id,
:quantity,
:provider_id,
:visit_occurrence_id,
:procedure_source_value,
:procedure_source_concept_id,
:qualifier_source_value])
else
                SQLTable(qualifiers = cdm_qualifiers,
                         name = :procedure_occurrence,
columns = [:procedure_occurrence_id,
:person_id,
:procedure_concept_id,
:procedure_date,
:procedure_datetime,
:procedure_type_concept_id,
:modifier_concept_id,
:quantity,
:provider_id,
:visit_occurrence_id,
:visit_detail_id,
:procedure_source_value,
:procedure_source_concept_id,
:modifier_source_value])
end
provider =
SQLTable(qualifiers = cdm_qualifiers,
name = :provider,
columns = [:provider_id,
:provider_name,
:npi,
:dea,
:specialty_concept_id,
:care_site_id,
:year_of_birth,
:gender_concept_id,
:provider_source_value,
:specialty_source_value,
:specialty_source_concept_id,
:gender_source_value,
:gender_source_concept_id])
relationship =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :relationship,
columns = [:relationship_id,
:relationship_name,
:is_hierarchical,
:defines_ancestry,
:reverse_relationship_id,
:relationship_concept_id])
source_to_concept_map =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :source_to_concept_map,
columns = [:source_code,
:source_concept_id,
:source_vocabulary_id,
:source_code_description,
:target_concept_id,
:target_vocabulary_id,
:valid_start_date,
:valid_end_date,
:invalid_reason])
specimen =
SQLTable(qualifiers = cdm_qualifiers,
name = :specimen,
columns = [:specimen_id,
:person_id,
:specimen_concept_id,
:specimen_type_concept_id,
:specimen_date,
:specimen_datetime,
:quantity,
:unit_concept_id,
:anatomic_site_concept_id,
:disease_status_concept_id,
:specimen_source_id,
:specimen_source_value,
:unit_source_value,
:anatomic_site_source_value,
:disease_status_source_value])
visit_detail =
if cdm_version < v"5.3"
nothing
else
SQLTable(qualifiers = cdm_qualifiers,
name = :visit_detail,
columns = [:visit_detail_id,
:person_id,
:visit_detail_concept_id,
:visit_detail_start_date,
:visit_detail_start_datetime,
:visit_detail_end_date,
:visit_detail_end_datetime,
:visit_detail_type_concept_id,
:provider_id,
:care_site_id,
:admitting_source_concept_id,
:discharge_to_concept_id,
:preceding_visit_detail_id,
:visit_detail_source_value,
:visit_detail_source_concept_id,
:admitting_source_value,
:discharge_to_source_value,
:visit_detail_parent_id,
:visit_occurrence_id])
end
visit_occurrence =
SQLTable(qualifiers = cdm_qualifiers,
name = :visit_occurrence,
columns = [:visit_occurrence_id,
:person_id,
:visit_concept_id,
:visit_start_date,
:visit_start_datetime,
:visit_end_date,
:visit_end_datetime,
:visit_type_concept_id,
:provider_id,
:care_site_id,
:visit_source_value,
:visit_source_concept_id,
:admitting_source_concept_id,
:admitting_source_value,
:discharge_to_concept_id,
:discharge_to_source_value,
:preceding_visit_occurrence_id])
vocabulary =
SQLTable(qualifiers = vocabulary_qualifiers,
name = :vocabulary,
columns = [:vocabulary_id,
:vocabulary_name,
:vocabulary_reference,
:vocabulary_version,
:vocabulary_concept_id])
new(attribute_definition,
care_site,
cdm_source,
cohort,
cohort_attribute,
cohort_definition,
concept,
concept_ancestor,
concept_class,
concept_relationship,
concept_synonym,
condition_era,
condition_occurrence,
cost,
death,
device_exposure,
domain,
dose_era,
drug_era,
drug_exposure,
drug_strength,
fact_relationship,
location,
measurement,
metadata,
note,
note_nlp,
observation,
observation_period,
payer_plan_period,
person,
procedure_occurrence,
provider,
relationship,
source_to_concept_map,
specimen,
visit_detail,
visit_occurrence,
vocabulary)
end
end
| OHDSICohortExpressions | https://github.com/MechanicalRabbit/OHDSICohortExpressions.jl.git |
|
[
"Apache-2.0"
] | 0.2.0 | f02458b04bd907651063c380900ef2e1c9941acc | code | 23324 | using Dates
using PrettyPrinting: PrettyPrinting, @isexpr
import Base: isempty, parse
# `@unpack struct T ... end` declares a struct whose fields are written as
# `"JsonKey" => name::Type [= default]` and generates, alongside the struct
# itself, a keyword constructor, an `unpack!(::Type{T}, data::Dict)` method
# that extracts each field from a JSON-style dictionary, and a matching
# `PrettyPrinting.quoteof` method.
macro unpack(ex)
if @isexpr ex Expr(:struct, mut::Bool, decl, Expr(:block, args...))
if @isexpr decl Expr(:(<:), T, _)
else
T = decl
end
struct_slots = Any[]
ctr_slots = Any[]
new_slots = Any[]
unpack_slots = Any[]
quoteof_slots = Any[]
for arg in args
if arg isa LineNumberNode
push!(struct_slots, arg)
continue
end
if @isexpr arg Expr(:(=), arg′, default)
arg = arg′
has_default = true
else
has_default = false
end
if @isexpr arg Expr(:call, :(=>), key, arg′)
arg = arg′
has_key = true
else
has_key = false
end
if @isexpr arg Expr(:(::), name, FT)
else
error("expected field declaration; got $(repr(arg))")
end
@isexpr FT Expr(:curly, :Union, FT, _)
push!(struct_slots, arg)
push!(ctr_slots, has_default ? Expr(:kw, name, default) : name)
push!(new_slots, name)
unpack_slot = Expr(:call, :unpack!, FT, :data)
if has_key
push!(unpack_slot.args, key)
if has_default
push!(unpack_slot.args, default)
end
end
unpack_slot = Expr(:kw, name, unpack_slot)
push!(unpack_slots, unpack_slot)
quoteof_slot = :(push!(ex.args, Expr(:kw, $(QuoteNode(name)), obj.$name)))
if has_default
quoteof_slot = :(obj.$name == $default || $quoteof_slot)
end
push!(quoteof_slots, quoteof_slot)
end
return quote
struct $decl
$(struct_slots...)
$T(; $(ctr_slots...)) = new($(new_slots...))
end
unpack!(::Type{$T}, data::Dict) = $T($(unpack_slots...))
function PrettyPrinting.quoteof(obj::$T)
ex = Expr(:call, nameof($T))
$(quoteof_slots...)
ex
end
end |> esc
else
error("expected a struct; got $(repr(ex))")
end
end
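
# For illustration (added example, not part of the original source): a
# declaration such as
#
#     @unpack struct Endpoint
#         "Days" => days::Union{Int, Nothing} = nothing
#         "Coeff" => coeff::Int
#     end
#
# expands to (roughly):
#
#     struct Endpoint
#         days::Union{Int, Nothing}
#         coeff::Int
#         Endpoint(; days = nothing, coeff) = new(days, coeff)
#     end
#     unpack!(::Type{Endpoint}, data::Dict) =
#         Endpoint(days = unpack!(Int, data, "Days", nothing),
#                  coeff = unpack!(Int, data, "Coeff"))
#
# plus a `PrettyPrinting.quoteof(::Endpoint)` method that omits fields still
# at their defaults.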
unpack!(T::Type, data::Dict, key::String, default) =
haskey(data, key) ?
something(unpack!(T, data, key), default) :
default
function unpack!(T::Type, data::Dict, key::String)
bucket = data[key]
retval = unpack!(T, bucket)
if !(bucket isa Union{Dict,Vector}) || isempty(bucket)
delete!(data, key)
end
return retval
end
unpack!(::Type{String}, data) =
data
function unpack!(T::Type{<:Union{Date, Number, Enum}}, data)
if data isa String
return parse(T, data)
end
return T(data)
end
function unpack!(::Type{Vector{T}}, items::Vector{Any}) where {T}
retval = T[]
for item in items
push!(retval, unpack!(T, item))
end
filter!(item -> !isempty(item), items)
return retval
end
@enum RangeOp GT GTE LT LTE EQ NEQ BT NBT
Base.parse(::Type{RangeOp}, s::String) =
s == "gt" ? GT :
s == "gte" ? GTE :
s == "lt" ? LT :
s == "lte" ? LTE :
s == "eq" ? EQ :
s == "!eq" ? NEQ :
s == "bt" ? BT :
s == "!bt" ? NBT :
throw(DomainError(s, "Unknown Range Operation"))
@enum TextOp CONTAINS NCONTAINS STARTSWITH NSTARTSWITH ENDSWITH NENDSWITH
Base.parse(::Type{TextOp}, s::String) =
s == "contains" ? CONTAINS :
s == "!contains" ? NCONTAINS :
s == "startsWith" ? STARTSWITH :
s == "!startsWith" ? NSTARTSWITH :
s == "endsWith" ? ENDSWITH :
s == "!endsWith" ? NENDSWITH :
throw(DomainError(s, "Unknown Text Operation"))
@unpack struct DateRange
"Value" => value::Date
"Op" => op::RangeOp
"Extent" => extent::Union{Date, Nothing} = nothing
end
@unpack struct TextFilter
"Text" => text::String = "null"
"Op" => op::TextOp
end
@unpack struct NumericRange
"Value" => value::Number
"Op" => op::RangeOp
"Extent" => extent::Union{Number, Nothing} = nothing
end
@enum InvalidReasonFlag UNKNOWN_REASON VALID INVALID
InvalidReasonFlag(::Nothing) = UNKNOWN_REASON
Base.parse(::Type{InvalidReasonFlag}, s::Union{String, Nothing}) =
s == "V" ? VALID :
s == "D" ? INVALID :
s == "U" ? INVALID :
isnothing(s) ? UNKNOWN_REASON :
throw(DomainError(s, "Unknown Invalid Reason Flag"))
@enum StandardConceptFlag UNKNOWN_STANDARD STANDARD NON_STANDARD CLASSIFICATION
StandardConceptFlag(::Nothing) = UNKNOWN_STANDARD
Base.parse(::Type{StandardConceptFlag}, s::Union{String, Nothing}) =
s == "N" ? NON_STANDARD :
s == "S" ? STANDARD :
s == "C" ? CLASSIFICATION :
isnothing(s) ? UNKNOWN_STANDARD :
throw(DomainError(s, "Unknown Standard Concept Flag"))
@unpack struct Concept
"CONCEPT_CLASS_ID" => concept_class_id::String = ""
"CONCEPT_CODE" => concept_code::String
"CONCEPT_ID" => concept_id::Int
"CONCEPT_NAME" => concept_name::String
"DOMAIN_ID" => domain_id::String
"INVALID_REASON" => invalid_reason::InvalidReasonFlag = UNKNOWN_REASON
"INVALID_REASON_CAPTION" => invalid_reason_caption::String
"STANDARD_CONCEPT" => standard_concept::StandardConceptFlag = UNKNOWN_STANDARD
"STANDARD_CONCEPT_CAPTION" => standard_concept_caption::String
"VOCABULARY_ID" => vocabulary_id::String
end
abstract type Criteria end
function Base.getproperty(obj::Criteria, prop::Symbol)
if prop in fieldnames(BaseCriteria)
return getfield(obj.base, prop)
else
return getfield(obj, prop)
end
end
@unpack struct Endpoint
"Days" => days::Union{Int, Nothing} = nothing
"Coeff" => coeff::Int
end
@unpack struct Window
"Start" => start::Endpoint
"End" => end_::Endpoint
"UseIndexEnd" => use_index_end::Union{Bool, Nothing} = nothing
"UseEventEnd" => use_event_end::Union{Bool, Nothing} = nothing
end
@enum OccurrenceType EXACTLY=0 AT_MOST=1 AT_LEAST=2
Base.parse(::Type{OccurrenceType}, s::String) =
s == "0" ? EXACTLY :
s == "1" ? AT_MOST :
s == "2" ? AT_LEAST :
throw(DomainError(s, "Unknown Occurrence Type"))
@unpack struct Occurrence
"Type" => type::OccurrenceType
"Count" => count::Int
"IsDistinct" => is_distinct::Bool = false
"CountColumn" => count_column::Union{String, Nothing} = nothing
end
@unpack struct CorrelatedCriteria
"Criteria" => criteria::Union{Criteria, Nothing} = nothing
"EndWindow" => end_window::Union{Window, Nothing} = nothing
"IgnoreObservationPeriod" => ignore_observation_period::Bool = false
"Occurrence" => occurrence::Union{Occurrence, Nothing} = nothing
"RestrictVisit" => restrict_visit::Bool = false
"StartWindow" => start_window::Window
end
@unpack struct DemographicCriteria
"Age" => age::Union{NumericRange, Nothing} = nothing
"Ethnicity" => ethnicity::Vector{Concept} = Concept[]
"Gender" => gender::Vector{Concept} = Concept[]
"OccurrenceEndDate" => occurrence_end_date::Union{DateRange, Nothing} = nothing
"OccurrenceStartDate" => occurrence_start_date::Union{DateRange, Nothing} = nothing
"Race" => race::Vector{Concept} = Concept[]
end
@enum CriteriaGroupType ALL_CRITERIA ANY_CRITERIA AT_LEAST_CRITERIA AT_MOST_CRITERIA
Base.parse(::Type{CriteriaGroupType}, s::String) =
s == "ALL" ? ALL_CRITERIA :
s == "ANY" ? ANY_CRITERIA :
s == "AT_LEAST" ? AT_LEAST_CRITERIA :
s == "AT_MOST" ? AT_MOST_CRITERIA :
throw(DomainError(s, "Unknown Criteria Group Type"))
@unpack struct CriteriaGroup
"Count" => count::Union{Int, Nothing} = nothing
"CriteriaList" => correlated_criteria::Vector{CorrelatedCriteria} = CorrelatedCriteria[]
"DemographicCriteriaList" => demographic_criteria::Vector{DemographicCriteria} = DemographicCriteria[]
"Groups" => groups::Vector{CriteriaGroup} = CriteriaGroup[]
"Type" => type::CriteriaGroupType
end
isempty(g::CriteriaGroup) =
isempty(g.correlated_criteria) &&
isempty(g.demographic_criteria) &&
isempty(g.groups)
@enum CollapseType UNKNOWN_COLLAPSE ERA
CollapseType(::Nothing) = UNKNOWN_COLLAPSE
Base.parse(::Type{CollapseType}, s::Union{String, Nothing}) =
s == "ERA" ? ERA :
isnothing(s) ? UNKNOWN_COLLAPSE :
throw(DomainError(s, "Unknown Collapse Type"))
@unpack struct CollapseSettings
"CollapseType" => collapse_type::CollapseType
"EraPad" => era_pad::Int = 0
end
@unpack struct Period
"StartDate" => start_date::Union{Date, Nothing} = nothing
"EndDate" => end_date::Union{Date, Nothing} = nothing
end
@unpack struct ConceptSetItem
"concept" => concept::Concept
"isExcluded" => is_excluded::Bool = false
"includeDescendants" => include_descendants::Bool = false
"includeMapped" => include_mapped::Bool = false
end
function unpack!(T::Type{Vector{ConceptSetItem}}, data::Dict)
items = data["items"]
retval = unpack!(T, items)
if isempty(items)
delete!(data, "items")
end
return retval
end
@unpack struct ConceptSet
"id" => id::Int
"name" => name::String
"expression" => items::Vector{ConceptSetItem} = ConceptSetItem[]
end
@unpack struct ConceptSetSelection
"CodesetId" => codeset_id::Int
"IsExclusion" => is_exclusion::Bool
end
abstract type EndStrategy end
@unpack struct CustomEraStrategy <: EndStrategy
"DrugCodesetId" => drug_codeset_id::Union{Int, Nothing} = nothing
"GapDays" => gap_days::Int = 0
"Offset" => offset::Int = 0
"DaysSupplyOverride" => days_supply_override::Union{Int, Nothing} = nothing
end
@enum DateField START_DATE END_DATE
Base.parse(::Type{DateField}, s::String) =
s == "StartDate" ? START_DATE :
s == "EndDate" ? END_DATE :
throw(DomainError(s, "Unknown Date Field"))
@unpack struct DateOffsetStrategy <: EndStrategy
"Offset" => offset::Integer
"DateField" => date_field::DateField
end
function unpack!(::Type{EndStrategy}, data::Dict)
if haskey(data, "DateOffset")
(key, type) = ("DateOffset", DateOffsetStrategy)
else
(key, type) = ("CustomEra", CustomEraStrategy)
end
subdata = data[key]
retval = unpack!(type, subdata)
if isempty(subdata)
delete!(data, key)
end
return retval
end
@unpack struct InclusionRule
"name" => name::String
"description" => description::String = ""
"expression" => expression::CriteriaGroup
end
@unpack struct ObservationFilter
"PriorDays" => prior_days::Int = 0
"PostDays" => post_days::Int = 0
end
@enum ResultLimitType FIRST LAST ALL
Base.parse(::Type{ResultLimitType}, s::Union{String, Nothing}) =
s == "First" ? FIRST :
s == "Last" ? LAST :
s == "All" ? ALL :
isnothing(s) ? FIRST :
throw(DomainError(s, "Unknown Result Limit Type"))
@unpack struct ResultLimit
"Type" => type::ResultLimitType = FIRST
end
@unpack struct PrimaryCriteria
"CriteriaList" => criteria_list::Vector{Criteria}
"ObservationWindow" => observation_window::ObservationFilter
"PrimaryCriteriaLimit" => primary_limit::ResultLimit
end
@unpack struct BaseCriteria
"Age" => age::Union{NumericRange, Nothing} = nothing
"CodesetId" => codeset_id::Union{Int, Nothing} = nothing
"CorrelatedCriteria" => correlated_criteria::Union{CriteriaGroup, Nothing} = nothing
"First" => first::Bool = false
"Gender" => gender::Vector{Concept} = Concept[]
"OccurrenceEndDate" => occurrence_end_date::Union{DateRange, Nothing} = nothing
"OccurrenceStartDate" => occurrence_start_date::Union{DateRange, Nothing} = nothing
"ProviderSpecialty" => provider_specialty::Vector{Concept} = Concept[]
"VisitType" => visit_type::Vector{Concept} = Concept[]
end
struct UnknownCriteria <: Criteria
end
unpack!(::Type{UnknownCriteria}, data::Dict) = UnknownCriteria()
PrettyPrinting.quoteof(obj::UnknownCriteria) =
Expr(:call, nameof(UnknownCriteria))
@unpack struct ConditionEra <: Criteria
# like DrugEra, but missing gap_length?
base::BaseCriteria
"EraEndDate" => era_end_date::Union{DateRange, Nothing} = nothing
"EraStartDate" => era_start_date::Union{DateRange, Nothing} = nothing
"EraLength" => era_length::Union{NumericRange, Nothing} = nothing
"OccurrenceCount" => occurrence_count::Union{NumericRange, Nothing} = nothing
"AgeAtStart" => age_at_start::Union{NumericRange, Nothing} = nothing
"AgeAtEnd" => age_at_end::Union{NumericRange, Nothing} = nothing
end
@unpack struct ConditionOccurrence <: Criteria
base::BaseCriteria
"ConditionSourceConcept" => condition_source_concept::Union{Int, Nothing} = nothing
"ConditionStatus" => condition_status::Vector{Concept} = Concept[]
"ConditionType" => condition_type::Vector{Concept} = Concept[]
"ConditionTypeExclude" => condition_type_exclude::Bool = false
"StopReason" => stop_reason::Union{TextFilter, Nothing} = nothing
end
@unpack struct Death <: Criteria
base::BaseCriteria
"DeathSourceConcept" => death_source_concept::Union{Int, Nothing} = nothing
"DeathType" => death_type::Vector{Concept} = Concept[]
"DeathTypeExclude" => death_type_exclude::Bool = false
end
@unpack struct DeviceExposure <: Criteria
base::BaseCriteria
"DeviceSourceConcept" => device_source_concept::Union{Int, Nothing} = nothing
"DeviceType" => device_type::Vector{Concept} = Concept[]
"DeviceTypeExclude" => device_type_exclude::Bool = false
"Quantity" => quantity::Union{NumericRange, Nothing} = nothing
"UniqueDeviceId" => unique_device_id::Union{TextFilter, Nothing} = nothing
end
@unpack struct DrugEra <: Criteria
base::BaseCriteria
"EraEndDate" => era_end_date::Union{DateRange, Nothing} = nothing
"EraStartDate" => era_start_date::Union{DateRange, Nothing} = nothing
"EraLength" => era_length::Union{NumericRange, Nothing} = nothing
"OccurrenceCount" => occurrence_count::Union{NumericRange, Nothing} = nothing
"GapDays" => gap_days::Union{NumericRange, Nothing} = nothing
"AgeAtStart" => age_at_start::Union{NumericRange, Nothing} = nothing
"AgeAtEnd" => age_at_end::Union{NumericRange, Nothing} = nothing
end
@unpack struct DrugExposure <: Criteria
base::BaseCriteria
"DrugSourceConcept" => drug_source_concept::Union{Int, Nothing} = nothing
"DrugType" => drug_type::Vector{Concept} = Concept[]
"DrugTypeExclude" => drug_type_exclude::Bool = false
"Refills" => refills::Union{NumericRange, Nothing} = nothing
"Quantity" => quantity::Union{NumericRange, Nothing} = nothing
"DaysSupply" => days_supply::Union{NumericRange, Nothing} = nothing
"RouteConcept" => route_concept::Vector{Concept} = Concept[]
"EffectiveDrugDose" => effective_drug_dose::Union{NumericRange, Nothing} = nothing
"DoseUnit" => dose_unit::Vector{Concept} = Concept[]
"LotNumber" => lot_number::Union{TextFilter, Nothing} = nothing
"StopReason" => stop_reason::Union{TextFilter, Nothing} = nothing
end
@unpack struct DoseEra <: Criteria
base::BaseCriteria
"DoseValue" => dose_value::Union{NumericRange, Nothing} = nothing
"EraEndDate" => era_end_date::Union{DateRange, Nothing} = nothing
"EraStartDate" => era_start_date::Union{DateRange, Nothing} = nothing
"EraLength" => era_length::Union{NumericRange, Nothing} = nothing
"AgeAtStart" => age_at_start::Union{NumericRange, Nothing} = nothing
"AgeAtEnd" => age_at_end::Union{NumericRange, Nothing} = nothing
"Unit" => unit::Vector{Concept} = Concept[]
end
@unpack struct LocationRegion <: Criteria
"CodesetId" => codeset_id::Union{Int, Nothing} = nothing
"StartDate" => start_date::Union{DateRange, Nothing} = nothing
"EndDate" => end_date::Union{DateRange, Nothing} = nothing
end
Base.getproperty(obj::LocationRegion, prop::Symbol) =
getfield(obj, prop)
@unpack struct Measurement <: Criteria
base::BaseCriteria
"MeasurementSourceConcept" => measurement_source_concept::Union{Int, Nothing} = nothing
"MeasurementType" => measurement_type::Vector{Concept} = Concept[]
"MeasurementTypeExclude" => measurement_type_exclude::Bool = false
"Abnormal" => abnormal::Union{Bool, Nothing} = nothing
"RangeLow" => range_low::Union{NumericRange, Nothing} = nothing
"RangeHigh" => range_high::Union{NumericRange, Nothing} = nothing
"RangeLowRatio" => range_low_ratio::Union{NumericRange, Nothing} = nothing
"RangeHighRatio" => range_high_ratio::Union{NumericRange, Nothing} = nothing
"ValueAsNumber" => value_as_number::Union{NumericRange, Nothing} = nothing
"ValueAsConcept" => value_as_concept::Vector{Concept} = Concept[]
"Operator" => operator::Vector{Concept} = Concept[]
"Unit" => unit::Vector{Concept} = Concept[]
end
@unpack struct Observation <: Criteria
base::BaseCriteria
"ObservationSourceConcept" => observation_source_concept::Union{Int, Nothing} = nothing
"ObservationType" => observation_type::Vector{Concept} = Concept[]
"ObservationTypeExclude" => observation_type_exclude::Bool = false
"ValueAsString" => value_as_string::Union{TextFilter, Nothing} = nothing
"ValueAsNumber" => value_as_number::Union{NumericRange, Nothing} = nothing
"ValueAsConcept" => value_as_concept::Vector{Concept} = Concept[]
"Qualifier" => qualifier::Vector{Concept} = Concept[]
"Unit" => unit::Vector{Concept} = Concept[]
end
@unpack struct ObservationPeriod <: Criteria
base::BaseCriteria
"PeriodType" => period_type::Vector{Concept} = Concept[]
"PeriodTypeExclude" => period_type_exclude::Bool = false
"PeriodStartDate" => period_start_date::Union{DateRange, Nothing} = nothing
"PeriodEndDate" => period_end_date::Union{DateRange, Nothing} = nothing
"PeriodLength" => period_length::Union{NumericRange, Nothing} = nothing
"AgeAtStart" => age_at_start::Union{NumericRange, Nothing} = nothing
"AgeAtEnd" => age_at_end::Union{NumericRange, Nothing} = nothing
"UserDefinedPeriod" => user_defined_period::Union{Period, Nothing} = nothing
end
@unpack struct PayerPlanPeriod <: Criteria
base::BaseCriteria
"PeriodType" => period_type::Vector{Concept} = Concept[]
"PeriodTypeExclude" => period_type_exclude::Bool = false
"PeriodStartDate" => period_start_date::Union{DateRange, Nothing} = nothing
"PeriodEndDate" => period_end_date::Union{DateRange, Nothing} = nothing
"PeriodLength" => period_length::Union{NumericRange, Nothing} = nothing
"AgeAtStart" => age_at_start::Union{NumericRange, Nothing} = nothing
"AgeAtEnd" => age_at_end::Union{NumericRange, Nothing} = nothing
"PayerConcept" => payer_concept::Union{Int, Nothing} = nothing
"PlanConcept" => plan_concept::Union{Int, Nothing} = nothing
"SponsorConcept" => sponsor_concept::Union{Int, Nothing} = nothing
"StopReasonConcept" => stop_reason_concept::Union{Int, Nothing} = nothing
"StopReasonSourceConcept" => stop_reason_source_concept::Union{Int, Nothing} = nothing
"PayerSourceConcept" => payer_source_concept::Union{Int, Nothing} = nothing
"PlanSourceConcept" => plan_source_concept::Union{Int, Nothing} = nothing
"SponsorSourceConcept" => sponsor_source_concept::Union{Int, Nothing} = nothing
"UserDefinedPeriod" => user_defined_period::Union{Period, Nothing} = nothing
end
@unpack struct ProcedureOccurrence <: Criteria
base::BaseCriteria
"ProcedureSourceConcept" => procedure_source_concept::Union{Int, Nothing} = nothing
"ProcedureType" => procedure_type::Vector{Concept} = Concept[]
"ProcedureTypeExclude" => procedure_type_exclude::Bool = false
"Modifier" => modifier::Vector{Concept} = Concept[]
"Quantity" => quantity::Union{NumericRange, Nothing} = nothing
end
@unpack struct Specimen <: Criteria
base::BaseCriteria
"SpecimenSourceConcept" => specimen_source_concept::Union{Int, Nothing} = nothing
"SpecimenType" => specimen_type::Vector{Concept} = Concept[]
"SpecimenTypeExclude" => specimen_type_exclude::Bool = false
"Quantity" => quantity::Union{NumericRange, Nothing} = nothing
"Unit" => unit::Vector{Concept} = Concept[]
"AnatomicSite" => anatomic_site::Vector{Concept} = Concept[]
"DiseaseStatus" => disease_status::Vector{Concept} = Concept[]
"SourceId" => source_id::Union{TextFilter, Nothing} = nothing
end
@unpack struct VisitDetail <: Criteria
base::BaseCriteria
"VisitDetailStartDate" => visit_detail_start_date::Union{Date, Nothing} = nothing
"VisitDetailEndDate" => visit_detail_end_date::Union{Date, Nothing} = nothing
"VisitDetailTypeCS" => visit_detail_type_selection::Union{ConceptSetSelection, Nothing} = nothing
"VisitDetailSourceConcept" => visit_detail_source_concept::Union{Int, Nothing} = nothing
"VisitDetailLength" => visit_detail_length::Union{NumericRange, Nothing} = nothing
"GenderCS" => gender_selection::Union{ConceptSetSelection, Nothing} = nothing
"ProviderSpecialtyCS" => provider_specialty_selection::Union{ConceptSetSelection, Nothing} = nothing
"PlaceOfServiceCS" => place_of_service_selection::Union{ConceptSetSelection, Nothing} = nothing
"PlaceOfServiceLocation" => place_of_service_location::Union{Int, Nothing} = nothing
end
@unpack struct VisitOccurrence <: Criteria
base::BaseCriteria
"PlaceOfService" => place_of_service::Vector{Concept} = Concept[]
"PlaceOfServiceLocation" => place_of_service_location::Union{Int, Nothing} = nothing
"VisitSourceConcept" => visit_source_concept::Union{Int, Nothing} = nothing
"VisitLength" => visit_length::Union{NumericRange, Nothing} = nothing
"VisitTypeExclude" => visit_type_exclude::Bool = false
end
function unpack!(::Type{Criteria}, data::Dict)
for type in (ConditionEra, ConditionOccurrence, Death,
DeviceExposure, DoseEra, DrugEra, DrugExposure,
LocationRegion, Measurement, Observation,
ObservationPeriod, PayerPlanPeriod,
ProcedureOccurrence, Specimen, VisitDetail,
VisitOccurrence)
key = string(nameof(type))
if haskey(data, key)
subdata = data[key]
retval = unpack!(type, subdata)
if isempty(subdata)
delete!(data, key)
end
return retval
end
end
return unpack!(UnknownCriteria, data)
end
@unpack struct CohortExpression
"AdditionalCriteria" => additional_criteria::Union{CriteriaGroup, Nothing} = nothing
"CensorWindow" => censor_window::Union{Period, Nothing} = nothing
"CensoringCriteria" => censoring_criteria::Vector{Criteria} = Criteria[]
"CollapseSettings" => collapse_settings::CollapseSettings
"ConceptSets" => concept_sets::Vector{ConceptSet} = ConceptSet[]
"EndStrategy" => end_strategy::Union{EndStrategy, Nothing} = nothing
"ExpressionLimit" => expression_limit::ResultLimit
"InclusionRules" => inclusion_rules::Vector{InclusionRule} = InclusionRule[]
"PrimaryCriteria" => primary_criteria::PrimaryCriteria
"QualifiedLimit" => qualified_limit::ResultLimit
"Title" => title::Union{String, Nothing} = nothing
"cdmVersionRange" => version_range::Union{String, Nothing} = nothing
end
unpack!(data) = unpack!(CohortExpression, data)
using JSON
using Dates
using PrettyPrinting
using FunSQL:
FunSQL, Agg, Append, As, Bind, Define, From, Fun, Get, Group, Join, LeftJoin,
Partition, Select, Var, Where, With, render, SQLClause, SQLNode, SQLTable, ID
struct TranslateContext
end
struct SwitchByDialectNode <: FunSQL.AbstractSQLNode
over::Union{SQLNode, Nothing}
cases::Vector{Symbol}
branches::Vector{SQLNode}
default::SQLNode
SwitchByDialectNode(; over = nothing, cases, branches, default) =
new(over, cases, branches, default)
end
SwitchByDialect(args...; kws...) =
SwitchByDialectNode(args...; kws...) |> SQLNode
function FunSQL.quoteof(n::SwitchByDialectNode, ctx)
ex = Expr(:call, nameof(SwitchByDialect))
push!(ex.args, Expr(:kw, :cases, Expr(:vect, Any[QuoteNode(case) for case in n.cases]...)))
push!(ex.args, Expr(:kw, :branches, Expr(:vect, Any[FunSQL.quoteof(branch, ctx) for branch in n.branches]...)))
push!(ex.args, Expr(:kw, :default, FunSQL.quoteof(n.default, ctx)))
if n.over !== nothing
ex = Expr(:call, :|>, FunSQL.quoteof(n.over, ctx), ex)
end
ex
end
function FunSQL.resolve(n::SwitchByDialectNode, ctx)
q = n.default
for (i, case) in enumerate(n.cases)
if case === ctx.catalog.dialect.name
q = n.branches[i]
break
end
end
over = n.over
if over !== nothing
q = over |> q
end
FunSQL.resolve(q, ctx)
end
function FunSQL.resolve_scalar(n::SwitchByDialectNode, ctx)
q = n.default
for (i, case) in enumerate(n.cases)
if case === ctx.catalog.dialect.name
q = n.branches[i]
break
end
end
over = n.over
if over !== nothing
q = over |> q
end
FunSQL.resolve_scalar(q, ctx)
end
p2e(p) =
SwitchByDialect(cases = [:sqlserver], branches = [Fun.case(p, 1, 0)], default = p)
force_p2e(p) =
SwitchByDialect(cases = [:sqlserver], branches = [p], default = Fun.case(p, 1, 0))
e2p(e) =
SwitchByDialect(cases = [:sqlserver], branches = [e .!= 0], default = e)
FunSQL.arity(::Val{:extract_year}) = 1:1
function FunSQL.serialize!(::Val{:extract_year}, args::Vector{SQLClause}, ctx)
if ctx.dialect.name === :sqlserver
FunSQL.@serialize! "year" args ctx
else
FunSQL.@serialize! "EXTRACT(YEAR FROM ?)" args ctx
end
end
FunSQL.arity(::Val{:dateadd_day}) = 2:2
function FunSQL.serialize!(::Val{:dateadd_day}, args::Vector{SQLClause}, ctx)
if ctx.dialect.name === :sqlserver
FunSQL.@serialize! "dateadd(day, ?, ?)" [args[2], args[1]] ctx
else
FunSQL.@serialize! "+" args ctx
end
end
FunSQL.arity(::Val{:datediff_day}) = 2:2
function FunSQL.serialize!(::Val{:datediff_day}, args::Vector{SQLClause}, ctx)
if ctx.dialect.name === :sqlserver || ctx.dialect.name === :spark
FunSQL.@serialize! "datediff(day, ?, ?)" [args[2], args[1]] ctx
else
FunSQL.@serialize! "-" args ctx
end
end
translate(c::AbstractString; cohort_definition_id = 0) =
translate(JSON.parse(c), cohort_definition_id = cohort_definition_id)
translate(c::Dict; cohort_definition_id = 0) =
translate(unpack!(deepcopy(c)), cohort_definition_id = cohort_definition_id)
function translate(c::CohortExpression; cohort_definition_id = 0)
@assert c.censor_window.start_date === c.censor_window.end_date === nothing
q = translate(c.primary_criteria)
if c.additional_criteria !== nothing && !isempty(c.additional_criteria)
q = q |>
translate(c.additional_criteria)
q = q |>
translate(c.qualified_limit)
end
for r in c.inclusion_rules
q = q |>
translate(r.expression)
end
q = q |>
translate(c.expression_limit)
if c.end_strategy !== nothing
q = q |>
translate(c.end_strategy)
else
q = q |>
Define(:end_date => Get.op_end_date)
end
q = q |>
Partition(order_by = [Get.person_id, Get.event_id]) |>
Define(:row_number => Agg.row_number())
for cc in c.censoring_criteria
q = q |>
LeftJoin(:censoring => translate(cc),
Fun.and(Get.person_id .== Get.censoring.person_id,
Get.start_date .<= Get.censoring.start_date,
Get.op_end_date .>= Get.censoring.start_date)) |>
Partition(Get.row_number, name = :min) |>
Partition(Get.row_number, order_by = [Get.row_number]) |>
Where(Agg.row_number() .== 1) |>
Define(:end_date => Fun.least(Get.end_date, Agg.min(Get.censoring.start_date, over = Get.min)))
end
q = q |>
translate(c.collapse_settings)
for s in c.concept_sets
q = q |>
translate(s)
end
q = q |>
Select(
:cohort_definition_id => cohort_definition_id,
:subject_id => Get.person_id,
:cohort_start_date => Get.start_date,
:cohort_end_date => Get.end_date)
end
function translate(r::ResultLimit; order_by = [Get.start_date])
if r.type == ALL
return Define()
end
if r.type == LAST
order_by = [Fun.datediff_day(order_by[1], Date(2020, 1, 1)), order_by[2:end]...]
end
Partition(Get.person_id, order_by = order_by) |>
Where(Agg.row_number() .== 1)
end
function translate(d::DateOffsetStrategy)
field =
d.date_field == START_DATE ? Get.start_date :
d.date_field == END_DATE ? Get.end_date :
nothing
Define(:end_date => dateadd_day(field, d.offset)) |>
Define(:end_date => Fun.case(Get.end_date .<= Get.op_end_date,
Get.end_date, Get.op_end_date))
end
function translate(s::CustomEraStrategy)
@assert s.offset == 0
@assert s.days_supply_override === nothing
gap = s.gap_days
q = From(:drug_exposure) |>
Where(Fun.or(Fun.in(Get.drug_concept_id, From("concept_set_$(s.drug_codeset_id)") |> Select(Get.concept_id)),
Fun.in(Get.drug_source_concept_id, From("concept_set_$(s.drug_codeset_id)") |> Select(Get.concept_id)))) |>
Define(:start_date => Get.drug_exposure_start_date,
:end_date => Fun.coalesce(Get.drug_exposure_end_date,
Fun.dateadd_day(Get.drug_exposure_start_date, Get.days_supply),
dateadd_day(Get.drug_exposure_start_date, 1))) |>
Define(:end_date => dateadd_day(Get.end_date, gap)) |>
Partition(Get.person_id, order_by = [Get.start_date], frame = (mode = :rows, start = -Inf, finish = -1)) |>
Define(:boundary => Agg.max(Get.end_date)) |>
Define(:bump => Fun.case(Get.start_date .<= Get.boundary, 0, 1)) |>
Partition(Get.person_id, order_by = [Get.start_date, .- Get.bump], frame = :rows) |>
Define(:group => Agg.sum(Get.bump)) |>
Group(Get.person_id, Get.group) |>
Define(:start_date => Agg.min(Get.start_date),
:end_date => dateadd_day(Agg.max(Get.end_date), - gap))
q = LeftJoin(:custom_era => q,
Fun.and(Get.person_id .== Get.custom_era.person_id,
Fun.between(Get.start_date, Get.custom_era.start_date, Get.custom_era.end_date))) |>
Define(:end_date => Fun.least(Get.op_end_date, Get.custom_era.end_date))
q
end
function dateadd_day(n, delta::Integer)
if iszero(delta)
return n
end
Fun.dateadd_day(n, delta)
end
function translate(c::PrimaryCriteria)
@assert length(c.criteria_list) >= 1
q = translate(c.criteria_list[1])
if length(c.criteria_list) > 1
args = [translate(l) for l in c.criteria_list[2:end]]
q = q |>
Append(args = args)
end
q = q |>
Join(:op => From(:observation_period) |>
Define(:start_date => Get.observation_period_start_date,
:end_date => Get.observation_period_end_date),
Get.person_id .== Get.op.person_id)
q = q |>
Define(:op_start_date => Get.op.start_date,
:op_end_date => Get.op.end_date)
l = dateadd_day(Get.op.start_date, c.observation_window.prior_days)
r = dateadd_day(Get.op.end_date, - c.observation_window.post_days)
q = q |>
Where(Fun.and(l .<= Get.start_date, Get.start_date .<= r))
q = q |>
translate(c.primary_limit, order_by = [Get.sort_date, Get.event_id])
q
end
function translate(d::ConditionEra)
    @assert d.era_start_date === nothing
    @assert d.era_end_date === nothing
@assert d.age_at_start === nothing
@assert d.age_at_end === nothing
q = From(:condition_era) |>
Define(:concept_id => Get.condition_concept_id,
:event_id => Get.condition_era_id,
:start_date => Get.condition_era_start_date,
:end_date => Get.condition_era_end_date,
:sort_date => Get.condition_era_start_date,
:visit_occurrence_id => 0)
if d.era_length !== nothing
        field = Fun.datediff_day(Get.condition_era_end_date, Get.condition_era_start_date)
q = q |>
Where(translate(d.era_length) |> Bind(:field => field))
end
if d.occurrence_count !== nothing
q = q |>
Where(translate(d.occurrence_count) |> Bind(:field => Get.condition_occurrence_count))
end
q = q |>
translate(d.base)
q
end
function translate(c::ConditionOccurrence)
@assert isempty(c.condition_status)
@assert c.stop_reason === nothing
q = From(:condition_occurrence) |>
Define(:concept_id => Get.condition_concept_id,
:event_id => Get.condition_occurrence_id,
:start_date => Get.condition_start_date,
:end_date => Fun.coalesce(Get.condition_end_date,
dateadd_day(Get.condition_start_date, 1)),
:sort_date => Get.condition_start_date)
if c.condition_source_concept !== nothing
q = q |>
Where(Fun.in(Get.condition_source_concept_id,
From("concept_set_$(c.condition_source_concept)") |> Select(Get.concept_id)))
end
if !isempty(c.condition_type)
args = SQLNode[Get.condition_type_concept_id, SQLNode[t.concept_id for t in c.condition_type]...]
if !c.condition_type_exclude
p = Fun.in(args = args)
else
p = Fun."not in"(args = args)
end
q = q |>
Where(p)
end
q = q |>
translate(c.base)
q
end
function translate(d::Death)
@assert isempty(d.death_type)
@assert !d.death_type_exclude
q = From(:death) |>
Define(:concept_id => Get.cause_concept_id,
:event_id => SwitchByDialect(cases = [:sqlserver], branches = [Get.person_id], default = 0),
:start_date => Get.death_date,
:end_date => dateadd_day(Get.death_date, 1),
:sort_date => Get.death_date,
:visit_occurrence_id => SwitchByDialect(cases = [:sqlserver], branches = [Get.person_id], default = 0))
if d.death_source_concept !== nothing
q = q |>
Where(Fun.in(Get.cause_source_concept_id,
From("concept_set_$(d.death_source_concept)") |> Select(Get.concept_id)))
end
q = q |>
translate(d.base)
q
end
function translate(d::DeviceExposure)
@assert isempty(d.device_type)
@assert !d.device_type_exclude
@assert d.quantity === nothing
@assert d.unique_device_id === nothing
q = From(:device_exposure) |>
Define(:concept_id => Get.device_concept_id,
:event_id => Get.device_exposure_id,
:start_date => Get.device_exposure_start_date,
:end_date => Fun.coalesce(Get.device_exposure_end_date,
dateadd_day(Get.device_exposure_start_date, 1)),
:sort_date => Get.device_exposure_start_date)
if d.device_source_concept !== nothing
q = q |>
Where(Fun.in(Get.device_source_concept_id,
From("concept_set_$(d.device_source_concept)") |> Select(Get.concept_id)))
end
q = q |>
translate(d.base)
q
end
function translate(d::DoseEra)
@assert d.dose_value === nothing
@assert d.era_start_date === nothing
@assert d.era_end_date === nothing
@assert d.age_at_start === nothing
@assert d.age_at_end === nothing
@assert isempty(d.unit)
q = From(:dose_era) |>
Define(:concept_id => Get.drug_concept_id,
:event_id => Get.dose_era_id,
:start_date => Get.dose_era_start_date,
:end_date => Get.dose_era_end_date,
:sort_date => Get.dose_era_start_date,
:visit_occurrence_id => 0)
if d.era_length !== nothing
field = Fun.datediff_day(Get.dose_era_end_date, Get.dose_era_start_date)
q = q |>
Where(translate(d.era_length) |> Bind(:field => field))
end
q = q |>
translate(d.base)
q
end
function translate(d::DrugEra)
@assert d.era_start_date === nothing
@assert d.era_end_date === nothing
@assert d.occurrence_count === nothing
@assert d.age_at_start === nothing
@assert d.age_at_end === nothing
q = From(:drug_era) |>
Define(:concept_id => Get.drug_concept_id,
:event_id => Get.drug_era_id,
:start_date => Get.drug_era_start_date,
:end_date => Get.drug_era_end_date,
:sort_date => Get.drug_era_start_date,
:visit_occurrence_id => 0)
if d.era_length !== nothing
field = Fun.datediff_day(Get.drug_era_end_date, Get.drug_era_start_date)
q = q |>
Where(translate(d.era_length) |> Bind(:field => field))
end
q = q |>
translate(d.base)
q
end
function translate(d::DrugExposure)
@assert isempty(d.drug_type)
@assert !d.drug_type_exclude
@assert d.refills === nothing
@assert d.quantity === nothing
@assert d.days_supply === nothing
@assert isempty(d.route_concept)
@assert d.effective_drug_dose === nothing
@assert isempty(d.dose_unit)
@assert d.lot_number === nothing
@assert d.stop_reason === nothing
q = From(:drug_exposure) |>
Define(:concept_id => Get.drug_concept_id,
:event_id => Get.drug_exposure_id,
:start_date => Get.drug_exposure_start_date,
:end_date => Fun.coalesce(Get.drug_exposure_end_date,
Fun.dateadd_day(Get.drug_exposure_start_date, Get.days_supply),
dateadd_day(Get.drug_exposure_start_date, 1)),
:sort_date => Get.drug_exposure_start_date)
if d.drug_source_concept !== nothing
q = q |>
Where(Fun.in(Get.drug_source_concept_id,
From("concept_set_$(d.drug_source_concept)") |> Select(Get.concept_id)))
end
q = q |>
translate(d.base)
q
end
function translate(m::Measurement)
@assert isempty(m.measurement_type)
@assert !m.measurement_type_exclude
@assert m.abnormal === nothing
@assert isempty(m.operator)
q = From(:measurement) |>
Define(:concept_id => Get.measurement_concept_id,
:event_id => Get.measurement_id,
:start_date => Get.measurement_date,
:end_date => dateadd_day(Get.measurement_date, 1),
:sort_date => Get.measurement_date)
if m.measurement_source_concept !== nothing
q = q |>
Where(Fun.in(Get.measurement_source_concept_id,
From("concept_set_$(m.measurement_source_concept)") |> Select(Get.concept_id)))
end
if !isempty(m.value_as_concept)
args = [Get.value_as_concept_id .== v.concept_id
for v in m.value_as_concept]
q = q |>
Where(Fun.or(args = args))
end
if m.range_low !== nothing
q = q |>
Where(translate(m.range_low) |> Bind(:field => Get.range_low))
end
if m.range_high !== nothing
q = q |>
Where(translate(m.range_high) |> Bind(:field => Get.range_high))
end
if m.range_low_ratio !== nothing
q = q |>
Where(translate(m.range_low_ratio) |> Bind(:field => Get.value_as_number ./ Fun.nullif(Get.range_low, 0)))
end
if m.range_high_ratio !== nothing
q = q |>
Where(translate(m.range_high_ratio) |> Bind(:field => Get.value_as_number ./ Fun.nullif(Get.range_high, 0)))
end
if m.value_as_number !== nothing
q = q |>
Where(translate(m.value_as_number) |> Bind(:field => Get.value_as_number))
end
if !isempty(m.unit)
args = [Get.unit_concept_id .== u.concept_id
for u in m.unit]
q = q |>
Where(Fun.or(args = args))
end
q = q |>
translate(m.base)
q
end
function translate(o::Observation)
@assert isempty(o.observation_type)
@assert !o.observation_type_exclude
@assert o.value_as_string === nothing
@assert isempty(o.qualifier)
q = From(:observation) |>
Define(:concept_id => Get.observation_concept_id,
:event_id => Get.observation_id,
:start_date => Get.observation_date,
:end_date => dateadd_day(Get.observation_date, 1),
:sort_date => Get.observation_date)
if o.observation_source_concept !== nothing
q = q |>
Where(Fun.in(Get.observation_source_concept_id,
From("concept_set_$(o.observation_source_concept)") |> Select(Get.concept_id)))
end
if !isempty(o.value_as_concept)
args = [Get.value_as_concept_id .== v.concept_id
for v in o.value_as_concept]
q = q |>
Where(Fun.or(args = args))
end
if o.value_as_number !== nothing
q = q |>
Where(translate(o.value_as_number) |> Bind(:field => Get.value_as_number))
end
if !isempty(o.unit)
args = [Get.unit_concept_id .== u.concept_id
for u in o.unit]
q = q |>
Where(Fun.or(args = args))
end
q = q |>
translate(o.base)
q
end
function translate(o::ObservationPeriod)
@assert isempty(o.period_type)
@assert !o.period_type_exclude
@assert o.period_start_date === nothing
@assert o.period_end_date === nothing
@assert o.age_at_start === nothing
@assert o.age_at_end === nothing
q = From(:observation_period) |>
Define(:event_id => Get.observation_period_id,
:start_date => Get.observation_period_start_date,
:end_date => Get.observation_period_end_date,
:sort_date => Get.observation_period_start_date,
:visit_occurrence_id => 0)
if o.period_length !== nothing
field = Fun.datediff_day(Get.end_date, Get.start_date)
q = q |>
Where(translate(o.period_length) |> Bind(:field => field))
end
if o.user_defined_period !== nothing
user_start_date = o.user_defined_period.start_date
user_end_date = o.user_defined_period.end_date
if user_start_date !== nothing
q = q |>
Where(Fun.and(Get.start_date .<= user_start_date,
Get.end_date .>= user_start_date))
end
if user_end_date !== nothing
q = q |>
Where(Fun.and(Get.start_date .<= user_end_date,
Get.end_date .>= user_end_date))
end
if user_start_date !== nothing
q = q |>
Define(:start_date => Fun.cast(user_start_date, "DATE"))
end
if user_end_date !== nothing
q = q |>
Define(:end_date => Fun.cast(user_end_date, "DATE"))
end
end
q = q |>
translate(o.base)
q
end
function translate(p::ProcedureOccurrence)
@assert isempty(p.procedure_type)
@assert !p.procedure_type_exclude
@assert isempty(p.modifier)
@assert p.quantity === nothing
q = From(:procedure_occurrence) |>
Define(:concept_id => Get.procedure_concept_id,
:event_id => Get.procedure_occurrence_id,
:start_date => Get.procedure_date,
:end_date => dateadd_day(Get.procedure_date, 1),
:sort_date => Get.procedure_date)
if p.procedure_source_concept !== nothing
q = q |>
Where(Fun.in(Get.procedure_source_concept_id,
From("concept_set_$(p.procedure_source_concept)") |> Select(Get.concept_id)))
end
q = q |>
translate(p.base)
q
end
function translate(s::Specimen)
@assert s.specimen_source_concept === nothing
@assert isempty(s.specimen_type)
@assert !s.specimen_type_exclude
@assert s.quantity === nothing
@assert isempty(s.unit)
@assert isempty(s.anatomic_site)
@assert isempty(s.disease_status)
@assert s.source_id === nothing
q = From(:specimen) |>
Define(:concept_id => Get.specimen_concept_id,
:event_id => Get.specimen_id,
:start_date => Get.specimen_date,
:end_date => dateadd_day(Get.specimen_date, 1),
:sort_date => Get.specimen_date)
q = q |>
translate(s.base)
q
end
function translate(v::VisitDetail)
@assert v.visit_detail_start_date === nothing
@assert v.visit_detail_end_date === nothing
@assert v.visit_detail_type_selection === nothing
@assert v.visit_detail_length === nothing
@assert v.gender_selection === nothing
@assert v.provider_specialty_selection === nothing
@assert v.place_of_service_selection === nothing
@assert v.place_of_service_location === nothing
q = From(:visit_detail) |>
Define(:concept_id => Get.visit_detail_concept_id,
:event_id => Get.visit_detail_id,
:start_date => Get.visit_detail_start_date,
:end_date => Get.visit_detail_end_date,
:sort_date => Get.visit_detail_start_date)
if v.visit_detail_source_concept !== nothing
q = q |>
Where(Fun.in(Get.visit_detail_source_concept_id,
From("concept_set_$(v.visit_detail_source_concept)") |> Select(Get.concept_id)))
end
q = q |>
translate(v.base)
q
end
function translate(v::VisitOccurrence)
@assert isempty(v.place_of_service)
@assert v.place_of_service_location === nothing
@assert v.visit_length === nothing
@assert !v.visit_type_exclude
q = From(:visit_occurrence) |>
Define(:concept_id => Get.visit_concept_id,
:event_id => Get.visit_occurrence_id,
:start_date => Get.visit_start_date,
:end_date => Get.visit_end_date,
:sort_date => Get.visit_start_date)
if v.visit_source_concept !== nothing
q = q |>
Where(Fun.in(Get.visit_source_concept_id,
From("concept_set_$(v.visit_source_concept)") |> Select(Get.concept_id)))
end
q = q |>
translate(v.base)
q
end
function translate(b::BaseCriteria)
@assert b.occurrence_end_date === nothing
q = Define()
if b.codeset_id !== nothing
q = q |>
Where(Fun.in(Get.concept_id,
From("concept_set_$(b.codeset_id)") |> Select(Get.concept_id)))
end
if b.first
q = q |>
Partition(Get.person_id, order_by = [Get.sort_date, Get.event_id]) |>
Where(Agg.row_number() .== 1)
end
if b.occurrence_start_date !== nothing
q = q |>
Where(translate(b.occurrence_start_date) |> Bind(:field => Get.start_date))
end
if b.age !== nothing || !isempty(b.gender)
q = q |>
Join(:person => From(:person),
Get.person_id .== Get.person.person_id)
end
if b.age !== nothing
q = q |>
Define(:age => Fun.extract_year(Get.start_date) .- Get.person.year_of_birth) |>
Where(translate(b.age) |> Bind(:field => Get.age))
end
if !isempty(b.gender)
args = [Get.person.gender_concept_id .== c.concept_id
for c in b.gender]
q = q |>
Where(Fun.or(args = args))
end
if !isempty(b.provider_specialty)
args = [Get.provider.specialty_concept_id .== c.concept_id
for c in b.provider_specialty]
q = q |>
Join(:provider => From(:provider),
Get.provider_id .== Get.provider.provider_id) |>
Where(Fun.or(args = args))
end
if !isempty(b.visit_type)
args = [Get.visit.visit_concept_id .== c.concept_id
for c in b.visit_type]
q = q |>
Join(:visit => From(:visit_occurrence),
Fun.and(Get.person_id .== Get.visit.person_id,
Get.visit_occurrence_id .== Get.visit.visit_occurrence_id)) |>
Where(Fun.or(args = args))
end
if b.correlated_criteria !== nothing
q = q |>
Join(:op_ => From(:observation_period),
Get.person_id .== Get.op_.person_id)
q = q |>
Define(:op_start_date => Get.op_.observation_period_start_date,
:op_end_date => Get.op_.observation_period_end_date)
q = q |>
Where(Fun.and(Get.op_start_date .<= Get.start_date, Get.start_date .<= Get.op_end_date))
q = q |>
translate(b.correlated_criteria)
end
q
end
function criteria_name!(args, name)
s = "c$(length(args) + 1)"
if name !== nothing
s = "$(name)_$(s)"
end
criteria_name = Symbol(s)
push!(args, Get(criteria_name))
criteria_name
end
function translate(c::CriteriaGroup; name = nothing)
!isempty(c) || return Define()
is_all = c.type == ALL_CRITERIA || (c.type == AT_LEAST_CRITERIA && c.count == length(c.demographic_criteria) + length(c.correlated_criteria) + length(c.groups))
is_any = c.type == ANY_CRITERIA || (c.type == AT_LEAST_CRITERIA && c.count == 1)
is_none = c.type == AT_MOST_CRITERIA && c.count == 0
args = SQLNode[]
q = Join(:person => From(:person),
Get.person_id .== Get.person.person_id,
optional = true) |>
Define(:age => Fun.extract_year(Get.start_date) .- Get.person.year_of_birth)
for criteria in c.demographic_criteria
criteria_name = nothing
if !(name === nothing && is_all)
criteria_name = criteria_name!(args, name)
end
q = q |>
translate(criteria, name = criteria_name)
end
q = q |>
Partition(order_by = [Get.person_id, Get.event_id]) |>
Define(:row_number => Agg.row_number())
for criteria in c.correlated_criteria
criteria_name = nothing
if !(name === nothing && is_all)
criteria_name = criteria_name!(args, name)
end
q = q |>
translate(criteria, name = criteria_name)
end
for group in c.groups
criteria_name = nothing
if !(name === nothing && is_all)
criteria_name = criteria_name!(args, name)
end
q = q |>
translate(group, name = criteria_name)
end
if !(name === nothing && is_all)
if is_all
p = Fun.and(args = e2p.(args))
elseif is_any
p = Fun.or(args = e2p.(args))
elseif is_none
args = [Fun.not(e2p(arg)) for arg in args]
p = Fun.and(args = args)
else
args = [force_p2e(arg) for arg in args]
n = length(args) > 1 ? Fun."+"(args = args) : args[1]
@assert c.type in (AT_MOST_CRITERIA, AT_LEAST_CRITERIA)
if c.type == AT_MOST_CRITERIA
p = n .<= c.count
elseif c.type == AT_LEAST_CRITERIA
p = n .>= c.count
end
end
if name !== nothing
q = q |>
Define(name => p2e(p))
else
q = q |>
Where(p)
end
end
q
end
function translate(d::DemographicCriteria; name = nothing)
@assert isempty(d.ethnicity)
@assert isempty(d.race)
@assert d.occurrence_end_date === nothing
args = SQLNode[]
if d.age !== nothing
push!(args, translate(d.age) |> Bind(:field => Get.age))
end
if !isempty(d.gender)
push!(args, Fun.in(args = SQLNode[Get.person.gender_concept_id, SQLNode[item.concept_id for item in d.gender]...]))
end
if d.occurrence_start_date !== nothing
push!(args, translate(d.occurrence_start_date) |> Bind(:field => Get.start_date))
end
p = Fun.and(args = args)
if name !== nothing
q = Define(name => p2e(p))
else
q = Where(p)
end
q
end
function translate(c::CorrelatedCriteria; name = nothing)
@assert c.occurrence !== nothing &&
(c.occurrence.count_column === nothing || c.occurrence.count_column in ("DOMAIN_CONCEPT", "START_DATE"))
@assert c.occurrence.type in (AT_LEAST, AT_MOST, EXACTLY)
@assert c.criteria !== nothing
on_args = [Get.correlated.person_id .== Get.person_id]
if c.restrict_visit
push!(on_args, Get.correlated.visit_occurrence_id .== Get.visit_occurrence_id)
end
if !c.ignore_observation_period
push!(on_args, Fun.and(Get.op_start_date .<= Get.correlated.start_date,
Get.correlated.start_date .<= Get.op_end_date))
end
push!(on_args, translate(c.start_window, start = true, ignore_observation_period = c.ignore_observation_period))
if c.end_window !== nothing
push!(on_args, translate(c.end_window, start = false, ignore_observation_period = c.ignore_observation_period))
end
left = !(name === nothing && c.occurrence.type in (AT_LEAST, EXACTLY) && c.occurrence.count > 0)
q = Join(:correlated => translate(c.criteria),
Fun.and(args = on_args),
left = left)
if c.occurrence.type == AT_LEAST && c.occurrence.count == 1
q = q |>
Partition(Get.row_number, order_by = [Get.row_number]) |>
Where(Agg.row_number() .== 1)
if name !== nothing
q = q |>
Define(name => p2e(Fun."is not null"(Get.correlated.event_id)))
end
return q
end
if c.occurrence.type in (EXACTLY, AT_MOST) && c.occurrence.count == 0
if name !== nothing
q = q |>
Partition(Get.row_number, order_by = [Get.row_number]) |>
Where(Agg.row_number() .== 1) |>
Define(name => p2e(Fun."is null"(Get.correlated.event_id)))
else
q = q |>
Where(Fun."is null"(Get.correlated.event_id))
end
return q
end
if c.occurrence.is_distinct
value = Get.concept_id
if c.occurrence.count_column == "START_DATE"
value = Get.start_date
end
count = Agg.max(Agg.dense_rank(), over = Get.count)
if left
count = Fun.case(Fun."is not null"(Get.correlated.event_id), count, 0)
end
q = q |>
Partition(Get.row_number, order_by = [Get.correlated |> value]) |>
Partition(Get.row_number, name = :count) |>
Partition(Get.row_number, order_by = [Get.row_number]) |>
Where(Agg.row_number() .== 1) |>
Define(:count => count)
else
q = q |>
Partition(Get.row_number, name = :count) |>
Partition(Get.row_number, order_by = [Get.row_number]) |>
Where(Agg.row_number() .== 1) |>
Define(:count => Agg.count(Get.correlated.event_id, over = Get.count))
end
if c.occurrence.type == AT_LEAST
p = Get.count .>= c.occurrence.count
elseif c.occurrence.type == AT_MOST
p = Get.count .<= c.occurrence.count
elseif c.occurrence.type == EXACTLY
p = Get.count .== c.occurrence.count
end
if name !== nothing
q = q |>
Define(name => p2e(p))
else
q = q |>
Where(p)
end
q
end
function translate(w::Window; start::Bool, ignore_observation_period::Bool)
index_date_field =
w.use_index_end == true ? Get.end_date : Get.start_date
event_date_field =
if start
w.use_event_end == true ?
Get.correlated.end_date : Get.correlated.start_date
else
w.use_event_end == false ?
Get.correlated.start_date : Get.correlated.end_date
end
l = nothing
r = nothing
if w.start.days !== nothing
l = dateadd_day(index_date_field, w.start.days * w.start.coeff)
elseif !ignore_observation_period
l = w.start.coeff == -1 ? Get.op_start_date : Get.op_end_date
end
if w.end_.days !== nothing
r = dateadd_day(index_date_field, w.end_.days * w.end_.coeff)
elseif !ignore_observation_period
r = w.end_.coeff == -1 ? Get.op_start_date : Get.op_end_date
end
args = []
if l !== nothing
push!(args, l .<= event_date_field)
end
if r !== nothing
push!(args, event_date_field .<= r)
end
Fun.and(args = args)
end
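# The conjunction built above constrains the correlated event date to the
# interval [index + start.days*start.coeff, index + end.days*end.coeff]. A
# plain-Julia sketch of that semantics, with dates modeled as integer day
# numbers (names are hypothetical):

```julia
# coeff = -1 places the bound before the index date, +1 after it.
in_window(event, index; start_days, start_coeff, end_days, end_coeff) =
    (index + start_coeff * start_days) <= event <= (index + end_coeff * end_days)

# Event 5 days after the index date, window [index - 30, index + 7]:
@assert in_window(105, 100; start_days = 30, start_coeff = -1, end_days = 7, end_coeff = 1)
@assert !in_window(65, 100; start_days = 30, start_coeff = -1, end_days = 7, end_coeff = 1)
```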
function translate(c::CollapseSettings)
@assert c.collapse_type == ERA
gap = c.era_pad
Define(:end_date => dateadd_day(Get.end_date, gap)) |>
Partition(Get.person_id, order_by = [Get.start_date], frame = (mode = :rows, start = -Inf, finish = -1)) |>
Define(:boundary => Agg.max(Get.end_date)) |>
Define(:bump => Fun.case(Get.start_date .<= Get.boundary, 0, 1)) |>
Partition(Get.person_id, order_by = [Get.start_date, .- Get.bump], frame = :rows) |>
Define(:group => Agg.sum(Get.bump)) |>
Group(Get.person_id, Get.group) |>
Define(:start_date => Agg.min(Get.start_date),
:end_date => dateadd_day(Agg.max(Get.end_date), - gap))
end
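# The pipeline above implements gap-based era collapsing: pad each end date by
# `era_pad`, start a new group whenever an event starts after the running
# padded boundary, then aggregate per group and un-pad. An equivalent
# plain-Julia sketch on (start, end) day pairs sorted by start (hypothetical
# helper, for illustration):

```julia
function collapse_eras(intervals::Vector{Tuple{Int,Int}}, pad::Int)
    isempty(intervals) && return intervals
    merged = [intervals[1]]
    for (s, e) in intervals[2:end]
        ms, me = merged[end]
        if s <= me + pad               # still within the padded boundary
            merged[end] = (ms, max(me, e))
        else                           # gap larger than the pad → new era
            push!(merged, (s, e))
        end
    end
    merged
end

@assert collapse_eras([(1, 5), (8, 10), (30, 31)], 3) == [(1, 10), (30, 31)]
@assert collapse_eras([(1, 5), (8, 10), (30, 31)], 0) == [(1, 5), (8, 10), (30, 31)]
```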
function translate(c::ConceptSet)
include = ConceptSetItem[]
exclude = ConceptSetItem[]
for item in c.items
if !item.is_excluded
push!(include, item)
else
push!(exclude, item)
end
end
q = translate(include)
if !isempty(exclude)
q = q |>
LeftJoin(:excluded => translate(exclude),
Get.concept_id .== Get.excluded.concept_id) |>
Where(Fun."is null"(Get.excluded.concept_id))
end
q = q |>
Select(Get.concept_id)
q = With("concept_set_$(c.id)" => q)
q
end
function translate(items::Vector{ConceptSetItem}; skip_mapped = false)
# TODO: Fun.in
args = SQLNode[item.concept.concept_id for item in items]
q = From(:concept) |>
Where(Fun.in(args = SQLNode[Get.concept_id, args...]))
with_descendants = [item for item in items if item.include_descendants]
if !isempty(with_descendants)
args = [item.concept.concept_id for item in with_descendants]
q = q |>
Append(
From(:concept) |>
Where(Fun."is null"(Get.invalid_reason)) |>
Join(From(:concept_ancestor),
Get.concept_id .== Get.descendant_concept_id) |>
Where(Fun.in(args = SQLNode[Get.ancestor_concept_id, args...]))) |>
Group(Get.concept_id)
end
with_mapped = [item for item in items if item.include_mapped]
if !isempty(with_mapped) && !skip_mapped
q = q |>
Append(
translate(with_mapped, skip_mapped = true) |>
Join(From(:concept_relationship) |>
                          Where(Fun.and(Fun."is null"(Get.invalid_reason),
Get.relationship_id .== "Maps to")),
Get.concept_id .== Get.concept_id_2) |>
Define(:concept_id => Get.concept_id_1))
end
q
end
function translate(r::Union{NumericRange, DateRange})
if r.op == GT
Var.field .> r.value
elseif r.op == GTE
Var.field .>= r.value
elseif r.op == LT
Var.field .< r.value
elseif r.op == LTE
Var.field .<= r.value
elseif r.op == EQ
Var.field .== r.value
elseif r.op == NEQ
Var.field .!= r.value
elseif r.op == BT
Fun.between(Var.field, r.value, r.extent)
elseif r.op == NBT
Fun."not between"(Var.field, r.value, r.extent)
end
end
| OHDSICohortExpressions | https://github.com/MechanicalRabbit/OHDSICohortExpressions.jl.git |
|
[
"Apache-2.0"
] | 0.2.0 | f02458b04bd907651063c380900ef2e1c9941acc | docs | 841 | # Release Notes
## v0.2.0
* Support for Spark/Databricks.
* Change the translation API. Instead of generating SQL that updates the
`cohort` table, translate a cohort definition into a FunSQL query that
returns the cohort as a query output.
* Require FunSQL >= 0.14.
## v0.1.5
* Fixed non-deterministic primary limit.
* Require FunSQL >= 0.11.
## v0.1.4
* Ignore empty criteria group (fixes #3).
## v0.1.3
* Drop temporary tables before creating them.
* Require FunSQL >= 0.9.
* Fixed `UndefKeywordError: keyword argument args not assigned`.
## v0.1.2
* Compatibility with FunSQL 0.9 and PrettyPrinting 0.4.
## v0.1.1
* Upgraded to FunSQL 0.8.
## v0.1.0
* Proof-of-concept implementation.
* Coverage of PhenotypeLibrary v0.1.
* Support for Microsoft SQL Server.
* Support for Amazon Redshift.
* Support for PostgreSQL.
| OHDSICohortExpressions | https://github.com/MechanicalRabbit/OHDSICohortExpressions.jl.git |
|
[
"Apache-2.0"
] | 0.2.0 | f02458b04bd907651063c380900ef2e1c9941acc | docs | 2606 | # OHDSICohortExpressions.jl
*OHDSI Cohort Expressions is a re-implementation of OHDSI's Circe*
[![Zulip Chat][chat-img]][chat-url]
[![Open Issues][issues-img]][issues-url]
[![Apache License][license-img]][license-url]
This is a proof-of-concept implementation of a conversion from the JSON
cohort definitions used in the OHDSI ecosystem into a FunSQL query.
### Project Status
At this time, this implementation is able to convert all 797 cohorts
from PhenotypeLibrary v0.1 to generate SQL that works against Amazon
Redshift, Microsoft SQL Server, and PostgreSQL.
There are significant gaps in functionality. Many expressions available
in the JSON cohort definition have yet to be translated. In these cases,
an assertion error should arise. We have yet to write documentation,
perform code review, or construct regression tests. The API is in a
provisional form and very likely to change.
### Example Usage
First, load or generate a cohort definition in OHDSI Circe format.
In this example, we load the cohort definition from `demo/ex-10-2.json`,
which corresponds to [exercise 10.2][ex-10-2] from the Book of OHDSI.
```julia
cohort_definition = read("demo/ex-10-2.json", String)
```
Next, use `OHDSICohortExpressions.translate()` to convert this cohort
definition to a FunSQL query object.
```julia
using OHDSICohortExpressions: translate
q = translate(cohort_definition, cohort_definition_id = 1)
```
Run `DBInterface.connect()` to create a connection to an OMOP CDM database.
The arguments of `DBInterface.connect()` depend on the database engine and
connection parameters. Consult FunSQL documentation for more information.
```julia
using FunSQL, DBInterface
db = DBInterface.connect(FunSQL.SQLConnection{ … }, … )
```
Execute the query to return the corresponding cohort.
```julia
using DataFrames
cr = DBInterface.execute(db, q)
df = DataFrame(cr)
```
[ex-10-2]: https://ohdsi.github.io/TheBookOfOhdsi/Cohorts.html#exr:exerciseCohortsSql
[chat-img]: https://img.shields.io/badge/chat-julia--zulip-blue
[chat-url]: https://julialang.zulipchat.com/#narrow/stream/237221-biology-health-and-medicine
[issues-img]: https://img.shields.io/github/issues/MechanicalRabbit/OHDSICohortExpressions.jl.svg
[issues-url]: https://github.com/MechanicalRabbit/OHDSICohortExpressions.jl/issues
[license-img]: https://img.shields.io/badge/license-Apache-blue.svg
[license-url]: https://raw.githubusercontent.com/MechanicalRabbit/OHDSICohortExpressions.jl/master/LICENSE
| OHDSICohortExpressions | https://github.com/MechanicalRabbit/OHDSICohortExpressions.jl.git |
|
[
"MIT"
] | 0.2.2 | ecdb4458557d6a4ab428e1f3dbfc23438ee7a604 | code | 254 | using Documenter, LinearCovarianceModels
makedocs(
sitename = "LinearCovarianceModels",
pages = [
"Introduction" => "index.md",
],
strict=true)
deploydocs(
repo = "github.com/saschatimme/LinearCovarianceModels.jl.git"
)
| LinearCovarianceModels | https://github.com/saschatimme/LinearCovarianceModels.jl.git |
|
[
"MIT"
] | 0.2.2 | ecdb4458557d6a4ab428e1f3dbfc23438ee7a604 | code | 16855 | module LinearCovarianceModels
export LCModel,
dim,
# families
generic_subspace,
generic_diagonal,
toeplitz,
tree,
trees,
# ML degree
ml_degree_witness,
MLDegreeWitness,
model,
parameters,
solutions,
is_dual,
ml_degree,
verify,
# solve specific instance
mle,
critical_points,
covariance_matrix,
logl,
gradient_logl,
hessian_logl,
classify_point,
# MLE helper
mle_system,
dual_mle_system,
# helpers
vec_to_sym,
sym_to_vec
using LinearAlgebra
import HomotopyContinuation
import HomotopyContinuation: dim, parameters, solutions
import Distributions: Normal
const HC = HomotopyContinuation
include("tree_data.jl")
outer(A) = A * A'
"""
rand_pos_def(n)
Create a random positive definite `n × n` matrix. The matrix is generated
by first creating a `n × n` matrix `X` where each entry is independently drawn
from the `Normal(μ=0, σ²=1.0)` distribution. Then `X*X'./n` is returned.
"""
rand_pos_def(n) = outer(rand(Normal(0, 1.0), n, n)) ./ n
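# A quick self-contained sanity check of the construction (using `randn`,
# which draws the same i.i.d. Normal(0, 1) entries as the call above):

```julia
using LinearAlgebra

rand_pos_def_sketch(n) = (X = randn(n, n); X * X' ./ n)

S = rand_pos_def_sketch(5)
@assert S ≈ S'                     # X*X' is symmetric
@assert isposdef(Symmetric(S))     # and (almost surely) positive definite
```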
n_vec_to_sym(k) = div(-1 + round(Int, sqrt(1 + 8k)), 2)
n_sym_to_vec(n) = binomial(n + 1, 2)
"""
vec_to_sym(v)
Converts a vector `v` to a symmetric matrix by filling the lower triangular
part columnwise.
### Example
```
julia> v = [1,2,3, 4, 5, 6];
julia> vec_to_sym(v)
3×3 Array{Int64,2}:
1 2 3
2 4 5
3 5 6
```
"""
function vec_to_sym(s)
n = n_vec_to_sym(length(s))
S = Matrix{eltype(s)}(undef, n, n)
l = 1
for i = 1:n, j = i:n
S[i, j] = S[j, i] = s[l]
l += 1
end
S
end
"""
sym_to_vec(S)
Converts a symmetric matrix `S` to a vector by stacking its lower triangular
part, iterating columnwise.
"""
sym_to_vec(S) = (n = size(S, 1); [S[i, j] for i = 1:n for j = i:n])
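# `vec_to_sym` and `sym_to_vec` are mutually inverse; a self-contained
# round-trip check (the helpers are restated so the sketch runs on its own):

```julia
n_vec_to_sym(k) = div(-1 + round(Int, sqrt(1 + 8k)), 2)

function vec_to_sym(s)
    n = n_vec_to_sym(length(s))
    S = Matrix{eltype(s)}(undef, n, n)
    l = 1
    for i in 1:n, j in i:n
        S[i, j] = S[j, i] = s[l]
        l += 1
    end
    S
end

sym_to_vec(S) = (n = size(S, 1); [S[i, j] for i in 1:n for j in i:n])

v = [1, 2, 3, 4, 5, 6]
S = vec_to_sym(v)
@assert S == [1 2 3; 2 4 5; 3 5 6]
@assert sym_to_vec(S) == v
```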
"""
LCModel(Σ::Matrix)
Create a linear covariance model from the parameterization `Σ`.
This uses as input a matrix of polynomials created by the `@var` macro from `HomotopyContinuation.jl`.
## Example
```
using HomotopyContinuation # load polynomials package
# use HomotopyContinuation to create variables θ₁, θ₂, θ₃.
@var θ[1:3]
# create our model as matrix of DynamicPolynomials
Σ = [θ[1] θ[2] θ[3]; θ[2] θ[1] θ[2]; θ[3] θ[2] θ[1]]
# create model
model = LCModel(Σ)
```
"""
struct LCModel{T1,T2<:Number}
Σ::Matrix{T1}
B::Vector{Matrix{T2}}
function LCModel(Σ::Matrix{T1}, B::Vector{Matrix{T2}}) where {T1,T2}
issymmetric(Σ) || throw(ArgumentError("Input is not a symmetric matrix!"))
new{T1,T2}(Σ, B)
end
end
LCModel(Σ::Matrix) = LCModel(Σ, get_basis(Σ))
LCModel(Σ::AbstractMatrix) = LCModel(Matrix(Σ .+ false))
function get_basis(Σ)
vars = HC.variables(vec(Σ))
map(1:length(vars)) do i
[
p(
vars[i] => 1,
vars[1:i-1] => zeros(Int, max(i - 1, 0)),
vars[i+1:end] => zeros(Int, max(length(vars) - i, 0)),
) for p in Σ
]
end
end
Base.size(M::LCModel) = (size(M.Σ, 1), length(M.B))
Base.size(M::LCModel, i::Int) = size(M)[i]
function Base.show(io::IO, M::LCModel)
println(io, "$(dim(M))-dimensional LCModel:")
Base.print_matrix(io, M.Σ)
end
Base.broadcastable(M::LCModel) = Ref(M)
"""
dim(M::LCModel)
Returns the dimension of the model.
"""
HC.dim(M::LCModel) = length(M.B)
"""
toeplitz(n::Integer)
Returns the linear covariance model of symmetric `n×n` Toeplitz matrices.
"""
function toeplitz(n::Integer)
HC.@var θ[1:n]
sum(0:n-1) do i
if i == 0
θ[1] .* diagm(0 => ones(Int, n))
else
θ[i+1] .* (diagm(i => ones(Int, n - i)) + diagm(-i => ones(Int, n - i)))
end
end |> LCModel
end
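# A numeric analog of the symbolic construction above: substituting concrete
# values for θ exposes the banded Toeplitz structure (hypothetical helper,
# for illustration only):

```julia
using LinearAlgebra

function toeplitz_matrix(θ::AbstractVector)
    n = length(θ)
    T = θ[1] * diagm(0 => ones(Int, n))
    for i in 1:n-1
        T += θ[i+1] * (diagm(i => ones(Int, n - i)) + diagm(-i => ones(Int, n - i)))
    end
    T
end

@assert toeplitz_matrix([1, 2, 3]) == [1 2 3; 2 1 2; 3 2 1]
```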
"""
tree(n, id::String)
Get the linear covariance model (`LCModel`) corresponding to the tree with the given `id` on `n` leaves.
Returns `nothing` if the tree was not found.
## Example
```
julia> tree(4, "{1,2},{3,4}")
7-dimensional LCModel:
 t1  t5  t7  t7
 t5  t2  t7  t7
 t7  t7  t3  t6
 t7  t7  t6  t4
```
"""
function tree(n::Integer, id::String)
4 ≤ n ≤ 7 || throw(ArgumentError("Only trees with 4 to 7 leaves are supported."))
for data in TREE_DATA
if data.n == n && data.id == id
return make_tree(data.tree)
end
end
nothing
end
function make_tree(tree::Matrix{Symbol})
var_names = sort(unique(vec(tree)))
D = Dict(map(v -> (v, HC.Variable(v)), var_names))
LCModel(map(v -> D[v], tree))
end
"""
trees(n)
Return all trees with `n` leaves as a vector of named tuples `(id, tree)`.
"""
function trees(n::Int)
4 ≤ n ≤ 7 || throw(ArgumentError("Only trees with 4 to 7 leaves are supported."))
map(d -> (id = d.id, tree = make_tree(d.tree)), filter(d -> d.n == n, TREE_DATA))
end
"""
    generic_subspace(n::Integer, m::Integer; pos_def::Bool = true)
Generate a generic family of symmetric ``n×n`` matrices living in an ``m``-dimensional
subspace. If `pos_def` is `true` then positive definite matrices are used as a basis.
"""
function generic_subspace(n::Integer, m::Integer; pos_def::Bool = true)
m ≤ binomial(n + 1, 2) ||
throw(ArgumentError("`m=$m` is larger than the dimension of the space."))
HC.@var θ[1:m]
if pos_def
LCModel(sum(θᵢ .* rand_pos_def(n) for θᵢ in θ))
else
LCModel(sum(θᵢ .* Symmetric(randn(n, n)) for θᵢ in θ))
end
end
"""
generic_diagonal(n::Integer, m::Integer)
Generate a generic family of ``n×n`` diagonal matrices living in an ``m``-dimensional
subspace.
"""
function generic_diagonal(n::Integer, m::Integer)
m ≤ n || throw(ArgumentError("`m=$m` is larger than the dimension of the space."))
HC.@var θ[1:m]
LCModel(sum(θᵢ .* diagm(0 => randn(n)) for θᵢ in θ))
end
"""
mle_system(M::LCModel)
Generate the MLE system corresponding to the family of covariance matrices
parameterized by the model `M`.
Returns a `HomotopyContinuation.System` in the variables `[θ; k]` with parameters `s`.
"""
function mle_system(M::LCModel)
Σ = M.Σ
θ = HC.variables(vec(Σ))
m = HC.nvariables(θ)
n = size(Σ, 1)
N = binomial(n + 1, 2)
HC.@var k[1:N] s[1:N]
K, S = vec_to_sym(k), vec_to_sym(s)
l = -tr(K * Σ) + tr(S * K * Σ * K)
∇l = HC.differentiate(l, θ)
KΣ_I = vec(K * Σ - Matrix(I, n, n))
HC.System([∇l; KΣ_I], variables = [θ; k], parameters = s)
end
"""
dual_mle_system(M::LCModel)
Generate the dual MLE system corresponding to the family of covariance matrices
parameterized by the model `M`.
Returns a `HomotopyContinuation.System` in the variables `[θ; k]` with parameters `s`.
"""
function dual_mle_system(M::LCModel)
Σ = M.Σ
θ = HC.variables(vec(Σ))
m = HC.nvariables(θ)
n = size(Σ, 1)
N = binomial(n + 1, 2)
HC.@var k[1:N] s[1:N]
K, S = vec_to_sym(k), vec_to_sym(s)
l = -tr(K * Σ) + tr(S * Σ)
∇l = HC.differentiate(l, θ)
KΣ_I = vec(K * Σ - Matrix(I, n, n))
HC.System([∇l; KΣ_I], variables = [θ; k], parameters = s)
end
"""
mle_system_and_start_pair(M::LCModel)
Generate the MLE system and a corresponding start pair `(x₀, p₀)`.
"""
function mle_system_and_start_pair(M::LCModel)
system = mle_system(M)
θ = HC.variables(vec(M.Σ))
p = HC.parameters(system)
θ₀ = randn(ComplexF64, length(θ))
Σ₀ = HC.evaluate(M.Σ, θ => θ₀)
K₀ = inv(Σ₀)
x₀ = [θ₀; sym_to_vec(K₀)]
exprs = system(x₀, p)[1:end-length(K₀)]
p₀, _ = HC.find_start_pair(HC.System(exprs, p); compile = false)
(system = system, x₀ = x₀, p₀ = p₀)
end
"""
dual_mle_system_and_start_pair(M::LCModel)
Generate the dual MLE system and a corresponding start pair `(x₀,p₀)`.
"""
function dual_mle_system_and_start_pair(M::LCModel)
system = dual_mle_system(M)
θ = HC.variables(vec(M.Σ))
p = HC.parameters(system)
θ₀ = randn(ComplexF64, length(θ))
Σ₀ = HC.evaluate(M.Σ, θ => θ₀)
K₀ = inv(Σ₀)
x₀ = [θ₀; sym_to_vec(K₀)]
exprs = system(x₀, p)[1:end-length(K₀)]
p₀, _ = HC.find_start_pair(HC.System(exprs, p); compile = false)
(system = system, x₀ = x₀, p₀ = p₀)
end
"""
MLDegreeWitness
Data structure holding an MLE model. This also holds a set of solutions for a generic instance,
which we call a witness.
"""
struct MLDegreeWitness{T1,T2,V<:AbstractVector}
model::LCModel{T1,T2}
solutions::Vector{V}
p::Vector{ComplexF64}
dual::Bool
end
function MLDegreeWitness(Σ::AbstractMatrix, solutions, p, dual)
MLDegreeWitness(LCModel(Σ), solutions, p, dual)
end
function Base.show(io::IO, R::MLDegreeWitness)
println(io, "MLDegreeWitness:")
println(io, " • ML degree → $(length(R.solutions))")
println(io, " • model dimension → $(dim(model(R)))")
println(io, " • dual → $(R.dual)")
end
"""
model(W::MLDegreeWitness)
Obtain the model corresponding to the `MLDegreeWitness` `W`.
"""
model(R::MLDegreeWitness) = R.model
"""
solutions(W::MLDegreeWitness)
Obtain the witness solutions corresponding to the `MLDegreeWitness` `W`
with given parameters.
"""
solutions(W::MLDegreeWitness) = W.solutions
"""
parameters(W::MLDegreeWitness)
Obtain the parameters of the `MLDegreeWitness` `W`.
"""
parameters(W::MLDegreeWitness) = W.p
"""
is_dual(W::MLDegreeWitness)
Indicates whether `W` is a witness for the dual MLE.
"""
is_dual(W::MLDegreeWitness) = W.dual
"""
ml_degree(W::MLDegreeWitness)
Returns the ML degree.
"""
ml_degree(W::MLDegreeWitness) = length(solutions(W))
"""
ml_degree_witness(Σ::LCModel; ml_degree=nothing, max_tries=5, dual=false, compile = false)
Compute a [`MLDegreeWitness`](@ref) for a given model Σ. If the ML degree is already
known, it can be provided to stop the computations early. The stopping criterion is based
on a heuristic; `max_tries` indicates how many different parameters are tried at most until
an agreement is found.
"""
ml_degree_witness(Σ; kwargs...) = ml_degree_witness(LCModel(Σ); kwargs...)
function ml_degree_witness(
M::LCModel;
ml_degree = nothing,
max_tries = 5,
dual = false,
compile = false,
)
if dual
F, x₀, p₀ = dual_mle_system_and_start_pair(M)
else
F, x₀, p₀ = mle_system_and_start_pair(M)
end
result =
HC.monodromy_solve(F, x₀, p₀; target_solutions_count = ml_degree, compile = compile)
if HC.nsolutions(result) == ml_degree
return MLDegreeWitness(M, HC.solutions(result), HC.parameters(result), dual)
end
best_result = result
result_agreed = false
for i = 1:max_tries
q₀ = randn(ComplexF64, length(p₀))
S_q₀ = HC.solutions(HC.solve(
F,
HC.solutions(result);
start_parameters = HC.parameters(result),
target_parameters = q₀,
compile = compile,
))
new_result =
HC.monodromy_solve(F, S_q₀, q₀; compile = compile, max_loops_no_progress = 3)
if HC.nsolutions(new_result) == HC.nsolutions(best_result)
result_agreed = true
break
elseif HC.nsolutions(new_result) > HC.nsolutions(best_result)
best_result = new_result
end
end
MLDegreeWitness(M, HC.solutions(best_result), HC.parameters(best_result), dual)
end
"""
verify(W::MLDegreeWitness; trace_tol=1e-5, options...)
Tries to verify that the computed ML degree witness is complete, i.e., that
the ML degree is correct. This uses the [`verify_solution_completeness`](https://www.juliahomotopycontinuation.org/HomotopyContinuation.jl/stable/monodromy/#HomotopyContinuation.verify_solution_completeness)
of HomotopyContinuation.jl. All caveats mentioned there apply.
The `options` are also passed to `verify_solution_completeness`.
"""
function verify(W::MLDegreeWitness; trace_tol = 1e-5, compile = false, kwargs...)
if W.dual
F = dual_mle_system(model(W))
else
F = mle_system(model(W))
end
HC.verify_solution_completeness(
F,
solutions(W),
parameters(W);
trace_tol = trace_tol,
monodromy_options = (compile = compile,),
parameter_homotopy_options = (compile = compile,),
kwargs...,
)
end
"""
critical_points(W::MLDegreeWitness, S::AbstractMatrix;
only_positive_definite=true, only_non_negative=false,
options...)
Compute all critical points to the MLE problem of `W` for the given sample covariance matrix
`S`. If `only_positive_definite` is `true` only positive definite solutions are considered.
If `only_non_negative` is `true` only non-negative solutions are considered.
The `options` are arguments passed to the [`solve`](https://www.juliahomotopycontinuation.org/HomotopyContinuation.jl/stable/solving/#HomotopyContinuation.solve) routine from `HomotopyContinuation.jl`.
"""
function critical_points(
W::MLDegreeWitness,
S::AbstractMatrix;
only_positive_definite = true,
only_non_negative = false,
compile = false,
kwargs...,
)
    issymmetric(S) || throw(ArgumentError(
        "Sample covariance matrix `S` is not symmetric. " *
        "Consider wrapping it in `Symmetric(S)` to enforce symmetry.",
    ))
if W.dual
F = dual_mle_system(model(W))
else
F = mle_system(model(W))
end
result = HC.solve(
F,
solutions(W);
start_parameters = W.p,
target_parameters = sym_to_vec(S),
compile = compile,
kwargs...,
)
m = size(model(W), 2)
θs = map(s -> s[1:m], HC.real_solutions(result))
if only_positive_definite
filter!(θs) do θ
isposdef(covariance_matrix(model(W), θ))
end
end
if only_non_negative
filter!(θs) do θ
all(covariance_matrix(model(W), θ) .≥ 0)
end
end
res = map(θs) do θ
(θ, logl(model(W), θ, S), classify_point(model(W), θ, S))
end
    if !isempty(res)
        # the critical point with the largest log-likelihood value is the
        # global maximum (provided it is a local maximum at all)
        best_val = maximum(r -> r[2], res)
        for i = 1:length(res)
            if res[i][2] == best_val && res[i][3] == :local_maximum
                res[i] = (res[i][1], res[i][2], :global_maximum)
            end
        end
    end
res
end
"""
covariance_matrix(M::LCModel, θ)
Compute the covariance matrix corresponding to the value of `θ` and the given model
`M`.
"""
covariance_matrix(W::MLDegreeWitness, θ) = covariance_matrix(model(W), θ)
covariance_matrix(M::LCModel, θ) = sum(θ[i] * M.B[i] for i = 1:size(M, 2))
"""
logl(M::LCModel, θ, S::AbstractMatrix)
Evaluate the log-likelihood ``log(det(Σ⁻¹)) - tr(SΣ⁻¹)`` of the MLE problem.
"""
function logl(M::LCModel, θ, S::AbstractMatrix)
logl(covariance_matrix(M, θ), S)
end
logl(Σ::AbstractMatrix, S::AbstractMatrix) = -logdet(Σ) - tr(S * inv(Σ))
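# A self-contained check of the formula: at Σ = S = I the log-determinant
# vanishes and the trace term equals n (the helper is restated so the sketch
# runs on its own):

```julia
using LinearAlgebra

logl_sketch(Σ, S) = -logdet(Σ) - tr(S * inv(Σ))

@assert logl_sketch(Matrix(1.0I, 3, 3), Matrix(1.0I, 3, 3)) ≈ -3.0
# Scaling Σ away from S can only decrease the likelihood:
@assert logl_sketch(2.0 * Matrix(1.0I, 3, 3), Matrix(1.0I, 3, 3)) < -3.0
```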
"""
gradient_logl(M::LCModel, θ, S::AbstractMatrix)
Evaluate the gradient of the log-likelihood ``log(det(Σ⁻¹)) - tr(SΣ⁻¹)`` of the MLE problem.
"""
gradient_logl(M::LCModel, θ, S::AbstractMatrix) = gradient_logl(M.B, θ, S)
function gradient_logl(B::Vector{<:Matrix}, θ, S::AbstractMatrix)
Σ = sum(θ[i] * B[i] for i = 1:length(B))
Σ⁻¹ = inv(Σ)
map(1:length(B)) do i
-tr(Σ⁻¹ * B[i]) + tr(S * Σ⁻¹ * B[i] * Σ⁻¹)
end
end
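# The gradient formula can be cross-checked against finite differences on a
# small diagonal family (self-contained sketch; the `_sketch` names are
# hypothetical restatements of the functions above):

```julia
using LinearAlgebra

logl_sketch(Σ, S) = -logdet(Σ) - tr(S * inv(Σ))

function gradient_logl_sketch(B::Vector{<:Matrix}, θ, S)
    Σ = sum(θ[i] * B[i] for i in 1:length(B))
    Σ⁻¹ = inv(Σ)
    [-tr(Σ⁻¹ * B[i]) + tr(S * Σ⁻¹ * B[i] * Σ⁻¹) for i in 1:length(B)]
end

B = [Matrix(1.0I, 2, 2), [1.0 0.0; 0.0 -1.0]]
θ = [2.0, 0.5]
S = [1.5 0.3; 0.3 1.0]
g = gradient_logl_sketch(B, θ, S)
h = 1e-6
for i in 1:2
    θp = copy(θ); θp[i] += h
    fd = (logl_sketch(sum(θp[j] * B[j] for j in 1:2), S) -
          logl_sketch(sum(θ[j] * B[j] for j in 1:2), S)) / h
    @assert isapprox(g[i], fd; atol = 1e-4)
end
```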
"""
hessian_logl(M::LCModel, θ, S::AbstractMatrix)
Evaluate the hessian of the log-likelihood ``log(det(Σ⁻¹)) - tr(SΣ⁻¹)`` of the MLE problem.
"""
hessian_logl(M::LCModel, θ, S::AbstractMatrix) = hessian_logl(M.B, θ, S)
function hessian_logl(B::Vector{<:Matrix}, θ, S::AbstractMatrix)
m = length(B)
Σ = sum(θ[i] * B[i] for i = 1:m)
Σ⁻¹ = inv(Σ)
H = zeros(eltype(Σ), m, m)
for i = 1:m, j = i:m
kernel = Σ⁻¹ * B[i] * Σ⁻¹ * B[j]
H[i, j] = H[j, i] = tr(kernel) - 2tr(S * kernel * Σ⁻¹)
end
Symmetric(H)
end
"""
classify_point(M::LCModel, θ, S::AbstractMatrix)
Classify the critical point `θ` of the log-likelihood function.
"""
function classify_point(M::LCModel, θ, S::AbstractMatrix)
H = hessian_logl(M, θ, S)
emin, emax = extrema(eigvals(H))
if emin < 0 && emax < 0
:local_maximum
elseif emin > 0 && emax > 0
:local_minimum
else
:saddle_point
end
end
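# The sign pattern of the Hessian eigenvalues drives the classification; a
# minimal standalone illustration (hypothetical helper — note that a negative
# largest eigenvalue already implies negative definiteness):

```julia
using LinearAlgebra

function classify_sketch(H::AbstractMatrix)
    emin, emax = extrema(eigvals(Symmetric(H)))
    emax < 0 ? :local_maximum :
    emin > 0 ? :local_minimum : :saddle_point
end

@assert classify_sketch([-2.0 0.0; 0.0 -1.0]) == :local_maximum
@assert classify_sketch([2.0 0.0; 0.0 1.0]) == :local_minimum
@assert classify_sketch([2.0 0.0; 0.0 -1.0]) == :saddle_point
```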
"""
mle(W::MLDegreeWitness, S::AbstractMatrix; only_positive_definite=true, only_positive=false)
Compute the MLE for the matrix `S` using the MLDegreeWitness `W`.
Returns the parameters for the MLE covariance matrix or `nothing` if no solution was found
satisfying the constraints (see options below).
## Options
* `only_positive_definite`: controls whether only positive definite
covariance matrices should be considered.
* `only_positive`: controls whether only (entrywise) positive covariance matrices
should be considered.
"""
function mle(
W::MLDegreeWitness,
S::AbstractMatrix;
only_positive_definite = true,
only_positive = false,
kwargs...,
)
is_dual(W) &&
throw(ArgumentError("`mle` is currently only supported for MLE not dual MLE."))
    results = critical_points(
        W,
        S;
        only_positive_definite = only_positive_definite,
        only_non_negative = only_positive,
        kwargs...,
    )
ind = findfirst(r -> r[3] == :global_maximum, results)
isnothing(ind) ? nothing : results[ind][1]
end
end # module
| LinearCovarianceModels | https://github.com/saschatimme/LinearCovarianceModels.jl.git |
|
[
"MIT"
] | 0.2.2 | ecdb4458557d6a4ab428e1f3dbfc23438ee7a604 | code | 30751 | const TREE_DATA = let
t = [Symbol("t$i") for i in 1:11]
[
(n = 3,
id = "{1,2}",orbit_size=3,
tree = [t[1] t[4] t[5];
t[4] t[2] t[5];
t[5] t[5] t[3]]),
# 4
#1,2,10
(n = 4,
id = "{1,2,3}",orbit_size=4,
tree = [t[1] t[5] t[5] t[6];
t[5] t[2] t[5] t[6];
t[5] t[5] t[3] t[6];
t[6] t[6] t[6] t[4]]),
(n = 4,
id = "{1,2}",orbit_size=6,
tree = [t[1] t[5] t[6] t[6];
t[5] t[2] t[6] t[6];
t[6] t[6] t[3] t[6];
t[6] t[6] t[6] t[4]]),
(n = 4,
id = "{1,2},{3,4}",orbit_size=3,
tree = [t[1] t[5] t[7] t[7];
t[5] t[2] t[7] t[7];
t[7] t[7] t[3] t[6];
t[7] t[7] t[6] t[4]]),
(n = 4,
id = "{1,2},{1,2,3}",orbit_size=12,
tree = [t[1] t[5] t[6] t[7];
t[5] t[2] t[6] t[7];
t[6] t[6] t[3] t[7];
t[7] t[7] t[7] t[4]]),
#1,3,25
(n=5,
id="{1,2,3,4}",orbit_size=5,
tree=[t[1] t[6] t[6] t[6] t[7];
t[6] t[2] t[6] t[6] t[7];
t[6] t[6] t[3] t[6] t[7];
t[6] t[6] t[6] t[4] t[7];
t[7] t[7] t[7] t[7] t[5]]),
(n=5,
id="{1,2}",orbit_size=10,
tree=[t[1] t[6] t[7] t[7] t[7];
t[6] t[2] t[7] t[7] t[7];
t[7] t[7] t[3] t[7] t[7];
t[7] t[7] t[7] t[4] t[7];
t[7] t[7] t[7] t[7] t[5]]),
(n=5,
id="{1,2,3}",orbit_size=10,
tree=[t[1] t[6] t[6] t[7] t[7];
t[6] t[2] t[6] t[7] t[7];
t[6] t[6] t[3] t[7] t[7];
t[7] t[7] t[7] t[4] t[7];
t[7] t[7] t[7] t[7] t[5]]),
# 2,5,105
(n=5,
id="{1,2},{3,4,5}",orbit_size=10,
tree=[t[1] t[6] t[8] t[8] t[8];
t[6] t[2] t[8] t[8] t[8];
t[8] t[8] t[3] t[7] t[7];
t[8] t[8] t[7] t[4] t[7];
t[8] t[8] t[7] t[7] t[5]]),
(n=5,
id="{1,2},{3,4}",orbit_size=15,
tree=[t[1] t[6] t[8] t[8] t[8];
t[6] t[2] t[8] t[8] t[8];
t[8] t[8] t[3] t[7] t[8];
t[8] t[8] t[7] t[4] t[8];
t[8] t[8] t[8] t[8] t[5]]),
(n=5,
id="{1,2,3},{1,2,3,4}",orbit_size=20,
tree=[t[1] t[6] t[6] t[7] t[8];
t[6] t[2] t[6] t[7] t[8];
t[6] t[6] t[3] t[7] t[8];
t[7] t[7] t[7] t[4] t[8];
t[8] t[8] t[8] t[8] t[5]]),
(n=5,
id="{1,2},{1,2,3}",orbit_size=30,
tree=[t[1] t[6] t[7] t[8] t[8];
t[6] t[2] t[7] t[8] t[8];
t[7] t[7] t[3] t[8] t[8];
t[8] t[8] t[8] t[4] t[8];
t[8] t[8] t[8] t[8] t[5]]),
(n=5,
id="{1,2},{1,2,3,4}",orbit_size=30,
tree=[t[1] t[6] t[7] t[7] t[8];
t[6] t[2] t[7] t[7] t[8];
t[7] t[7] t[3] t[7] t[8];
t[7] t[7] t[7] t[4] t[8];
t[8] t[8] t[8] t[8] t[5]]),
# 3,3,105
(n=5,
id="{1,2},{3,4},{1,2,3,4}",orbit_size=15,
tree=[t[1] t[6] t[8] t[8] t[9];
t[6] t[2] t[8] t[8] t[9];
t[8] t[8] t[3] t[7] t[9];
t[8] t[8] t[7] t[4] t[9];
t[9] t[9] t[9] t[9] t[5]]),
(n=5,
id="{1,2},{3,4},{1,2,5}",orbit_size=30,
tree=[t[1] t[6] t[9] t[9] t[8];
t[6] t[2] t[9] t[9] t[8];
t[9] t[9] t[3] t[7] t[9];
t[9] t[9] t[7] t[4] t[9];
t[8] t[8] t[9] t[9] t[5]]),
(n=5,
id="{1,2},{1,2,3},{1,2,3,4}",orbit_size=60,
tree=[t[1] t[6] t[7] t[8] t[9];
t[6] t[2] t[7] t[8] t[9];
t[7] t[7] t[3] t[8] t[9];
t[8] t[8] t[8] t[4] t[9];
t[9] t[9] t[9] t[9] t[5]]),
# 1.726
#
# 6
#
# 1,4,56
(n=6,
id="{1,2,3,4,5}",orbit_size=6,
tree=[t[1] t[7] t[7] t[7] t[7] t[8];
t[7] t[2] t[7] t[7] t[7] t[8];
t[7] t[7] t[3] t[7] t[7] t[8];
t[7] t[7] t[7] t[4] t[7] t[8];
t[7] t[7] t[7] t[7] t[5] t[8];
t[8] t[8] t[8] t[8] t[8] t[6]]),
(n=6,
id="{1,2}",orbit_size=15,
tree=[t[1] t[7] t[8] t[8] t[8] t[8];
t[7] t[2] t[8] t[8] t[8] t[8];
t[8] t[8] t[3] t[8] t[8] t[8];
t[8] t[8] t[8] t[4] t[8] t[8];
t[8] t[8] t[8] t[8] t[5] t[8];
t[8] t[8] t[8] t[8] t[8] t[6]]),
(n=6,
id="{1,2,3,4}",orbit_size=15,
tree=[t[1] t[7] t[7] t[7] t[8] t[8];
t[7] t[2] t[7] t[7] t[8] t[8];
t[7] t[7] t[3] t[7] t[8] t[8];
t[7] t[7] t[7] t[4] t[8] t[8];
t[8] t[8] t[8] t[8] t[5] t[8];
t[8] t[8] t[8] t[8] t[8] t[6]]),
(n=6,
id="{1,2,3}",orbit_size=20,
tree=[t[1] t[7] t[7] t[8] t[8] t[8];
t[7] t[2] t[7] t[8] t[8] t[8];
t[7] t[7] t[3] t[8] t[8] t[8];
t[8] t[8] t[8] t[4] t[8] t[8];
t[8] t[8] t[8] t[8] t[5] t[8];
t[8] t[8] t[8] t[8] t[8] t[6]]),
# 2,10,490
(n=6,
id="{1,2,3},{4,5,6}",orbit_size=10,
tree=[t[1] t[7] t[7] t[9] t[9] t[9];
t[7] t[2] t[7] t[9] t[9] t[9];
t[7] t[7] t[3] t[9] t[9] t[9];
t[9] t[9] t[9] t[4] t[8] t[8];
t[9] t[9] t[9] t[8] t[5] t[8];
t[9] t[9] t[9] t[8] t[8] t[6]]),
(n=6,
id="{1,2},{3,4,5,6}",orbit_size=15,
tree=[t[1] t[7] t[9] t[9] t[9] t[9];
t[7] t[2] t[9] t[9] t[9] t[9];
t[9] t[9] t[3] t[8] t[8] t[8];
t[9] t[9] t[8] t[4] t[8] t[8];
t[9] t[9] t[8] t[8] t[5] t[8];
t[9] t[9] t[8] t[8] t[8] t[6]]),
(n=6,
id="{1,2,3,4},{1,2,3,4,5}",orbit_size=30,
tree=[t[1] t[7] t[7] t[7] t[8] t[9];
t[7] t[2] t[7] t[7] t[8] t[9];
t[7] t[7] t[3] t[7] t[8] t[9];
t[7] t[7] t[7] t[4] t[8] t[9];
t[8] t[8] t[8] t[8] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2},{3,4}",orbit_size=45,
tree=[t[1] t[7] t[9] t[9] t[9] t[9];
t[7] t[2] t[9] t[9] t[9] t[9];
t[9] t[9] t[3] t[8] t[9] t[9];
t[9] t[9] t[8] t[4] t[9] t[9];
t[9] t[9] t[9] t[9] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2},{1,2,3}",orbit_size=60,
tree=[t[1] t[7] t[8] t[9] t[9] t[9];
t[7] t[2] t[8] t[9] t[9] t[9];
t[8] t[8] t[3] t[9] t[9] t[9];
t[9] t[9] t[9] t[4] t[9] t[9];
t[9] t[9] t[9] t[9] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2},{3,4,5}",orbit_size=60,
tree=[t[1] t[7] t[9] t[9] t[9] t[9];
t[7] t[2] t[9] t[9] t[9] t[9];
t[9] t[9] t[3] t[8] t[8] t[9];
t[9] t[9] t[8] t[4] t[8] t[9];
t[9] t[9] t[8] t[8] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2},{1,2,3,4,5}",orbit_size=60,
tree=[t[1] t[7] t[8] t[8] t[8] t[9];
t[7] t[2] t[8] t[8] t[8] t[9];
t[8] t[8] t[3] t[8] t[8] t[9];
t[8] t[8] t[8] t[4] t[8] t[9];
t[8] t[8] t[8] t[8] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2,3},{1,2,3,4}",orbit_size=60,
tree=[t[1] t[7] t[7] t[8] t[9] t[9];
t[7] t[2] t[7] t[8] t[9] t[9];
t[7] t[7] t[3] t[8] t[9] t[9];
t[8] t[8] t[8] t[4] t[9] t[9];
t[9] t[9] t[9] t[9] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2,3},{1,2,3,4,5}",orbit_size=60,
tree=[t[1] t[7] t[7] t[8] t[8] t[9];
t[7] t[2] t[7] t[8] t[8] t[9];
t[7] t[7] t[3] t[8] t[8] t[9];
t[8] t[8] t[8] t[4] t[8] t[9];
t[8] t[8] t[8] t[8] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2},{1,2,3,4}",orbit_size=90,
tree=[t[1] t[7] t[8] t[8] t[9] t[9];
t[7] t[2] t[8] t[8] t[9] t[9];
t[8] t[8] t[3] t[8] t[9] t[9];
t[8] t[8] t[8] t[4] t[9] t[9];
t[9] t[9] t[9] t[9] t[5] t[9];
t[9] t[9] t[9] t[9] t[9] t[6]]),
# 3,12,1260
(n=6,
id="{1,2},{3,4},{5,6}",orbit_size=15,
tree=[t[1] t[7] t[10] t[10] t[10] t[10];
t[7] t[2] t[10] t[10] t[10] t[10];
t[10] t[10] t[3] t[8] t[10] t[10];
t[10] t[10] t[8] t[4] t[10] t[10];
t[10] t[10] t[10] t[10] t[5] t[9] ;
t[10] t[10] t[10] t[10] t[9] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,3,4}",orbit_size=45,
tree=[t[1] t[7] t[9] t[9] t[10] t[10];
t[7] t[2] t[9] t[9] t[10] t[10];
t[9] t[9] t[3] t[8] t[10] t[10];
t[9] t[9] t[8] t[4] t[10] t[10];
t[10] t[10] t[10] t[10] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
(n=6,
id="{1,2},{1,2,3},{4,5,6}",orbit_size=60,
tree=[t[1] t[7] t[8] t[10] t[10] t[10];
t[7] t[2] t[8] t[10] t[10] t[10];
t[8] t[8] t[3] t[10] t[10] t[10];
t[10] t[10] t[10] t[4] t[9] t[9] ;
t[10] t[10] t[10] t[9] t[5] t[9] ;
t[10] t[10] t[10] t[9] t[9] t[6]]),
(n=6,
id="{1,2},{3,4,5},{3,4,5,6}",orbit_size=60,
tree=[t[1] t[7] t[10] t[10] t[10] t[10];
t[7] t[2] t[10] t[10] t[10] t[10];
t[10] t[10] t[3] t[8] t[8] t[9] ;
t[10] t[10] t[8] t[4] t[8] t[9] ;
t[10] t[10] t[8] t[8] t[5] t[9] ;
t[10] t[10] t[9] t[9] t[9] t[6]]),
(n=6,
id="{1,2},{3,4,5},{1,2,3,4,5}",orbit_size=60,
tree=[t[1] t[7] t[9] t[9] t[9] t[10];
t[7] t[2] t[9] t[9] t[9] t[10];
t[9] t[9] t[3] t[8] t[8] t[10];
t[9] t[9] t[8] t[4] t[8] t[10];
t[9] t[9] t[8] t[8] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,5,6}",orbit_size=90,
tree=[t[1] t[7] t[10] t[10] t[9] t[9] ;
t[7] t[2] t[10] t[10] t[9] t[9] ;
t[10] t[10] t[3] t[8] t[10] t[10];
t[10] t[10] t[8] t[4] t[10] t[10];
t[9] t[9] t[10] t[10] t[5] t[9] ;
t[9] t[9] t[10] t[10] t[9] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,3,4,5}",orbit_size=90,
tree=[t[1] t[7] t[9] t[9] t[9] t[10];
t[7] t[2] t[9] t[9] t[9] t[10];
t[9] t[9] t[3] t[8] t[9] t[10];
t[9] t[9] t[8] t[4] t[9] t[10];
t[9] t[9] t[9] t[9] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
(n=6,
id="{1,2,3},{1,2,3,4},{1,2,3,4,5}",orbit_size=120,
tree=[t[1] t[7] t[7] t[8] t[9] t[10];
t[7] t[2] t[7] t[8] t[9] t[10];
t[7] t[7] t[3] t[8] t[9] t[10];
t[8] t[8] t[8] t[4] t[9] t[10];
t[9] t[9] t[9] t[9] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,5}",orbit_size=180,
tree=[t[1] t[7] t[10] t[10] t[9] t[10];
t[7] t[2] t[10] t[10] t[9] t[10];
t[10] t[10] t[3] t[8] t[10] t[10];
t[10] t[10] t[8] t[4] t[10] t[10];
t[9] t[9] t[10] t[10] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
(n=6,
id="{1,2},{1,2,3},{1,2,3,4}",orbit_size=180,
tree=[t[1] t[7] t[8] t[9] t[10] t[10];
t[7] t[2] t[8] t[9] t[10] t[10];
t[8] t[8] t[3] t[9] t[10] t[10];
t[9] t[9] t[9] t[4] t[10] t[10];
t[10] t[10] t[10] t[10] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
(n=6,
id="{1,2},{1,2,3},{1,2,3,4,5}",orbit_size=180,
tree=[t[1] t[7] t[8] t[9] t[9] t[10];
t[7] t[2] t[8] t[9] t[9] t[10];
t[8] t[8] t[3] t[9] t[9] t[10];
t[9] t[9] t[9] t[4] t[9] t[10];
t[9] t[9] t[9] t[9] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
(n=6,
id="{1,2},{1,2,3,4},{1,2,3,4,5}",orbit_size=180,
tree=[t[1] t[7] t[8] t[8] t[9] t[10];
t[7] t[2] t[8] t[8] t[9] t[10];
t[8] t[8] t[3] t[8] t[9] t[10];
t[8] t[8] t[8] t[4] t[9] t[10];
t[9] t[9] t[9] t[9] t[5] t[10];
t[10] t[10] t[10] t[10] t[10] t[6]]),
#4,6,945
(n=6,
id="{1,2},{3,4},{5,6},{1,2,3,4}",orbit_size=45,
tree=[t[1] t[7] t[10] t[10] t[11] t[11];
t[7] t[2] t[10] t[10] t[11] t[11];
t[10] t[10] t[3] t[8] t[11] t[11];
t[10] t[10] t[8] t[4] t[11] t[11];
t[11] t[11] t[11] t[11] t[5] t[9] ;
t[11] t[11] t[11] t[11] t[9] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,5},{3,4,6}",orbit_size=90,
tree=[t[1] t[7] t[11] t[11] t[9] t[11];
t[7] t[2] t[11] t[11] t[9] t[11];
t[11] t[11] t[3] t[8] t[11] t[10];
t[11] t[11] t[8] t[4] t[11] t[10];
t[9] t[9] t[11] t[11] t[5] t[11];
t[11] t[11] t[10] t[10] t[11] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,3,4},{1,2,3,4,5}",orbit_size=90,
tree=[t[1] t[7] t[9] t[9] t[10] t[11];
t[7] t[2] t[9] t[9] t[10] t[11];
t[9] t[9] t[3] t[8] t[10] t[11];
t[9] t[9] t[8] t[4] t[10] t[11];
t[10] t[10] t[10] t[10] t[5] t[11];
t[11] t[11] t[11] t[11] t[11] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,5},{1,2,5,6}",orbit_size=180,
tree=[t[1] t[7] t[11] t[11] t[9] t[10];
t[7] t[2] t[11] t[11] t[9] t[10];
t[11] t[11] t[3] t[8] t[11] t[11];
t[11] t[11] t[8] t[4] t[11] t[11];
t[9] t[9] t[11] t[11] t[5] t[10];
t[10] t[10] t[11] t[11] t[10] t[6]]),
(n=6,
id="{1,2},{3,4},{1,2,5},{1,2,3,4,5}",orbit_size=180,
tree=[t[1] t[7] t[10] t[10] t[9] t[11];
t[7] t[2] t[10] t[10] t[9] t[11];
t[10] t[10] t[3] t[8] t[10] t[11];
t[10] t[10] t[8] t[4] t[10] t[11];
t[9] t[9] t[10] t[10] t[5] t[11];
t[11] t[11] t[11] t[11] t[11] t[6]]),
(n=6,
id="{1,2},{1,2,3},{1,2,3,4},{1,2,3,4,5}",orbit_size=360,
tree=[t[1] t[7] t[8] t[9] t[10] t[11];
t[7] t[2] t[8] t[9] t[10] t[11];
t[8] t[8] t[3] t[9] t[10] t[11];
t[9] t[9] t[9] t[4] t[10] t[11];
t[10] t[10] t[10] t[10] t[5] t[11];
t[11] t[11] t[11] t[11] t[11] t[6]]),
# 215.768
#
# 7
#
# 1,5,119
(n=7,
id="{1,2,3,4,5,6}",orbit_size=7,
tree=[t[1] t[8] t[8] t[8] t[8] t[8] t[9];
t[8] t[2] t[8] t[8] t[8] t[8] t[9];
t[8] t[8] t[3] t[8] t[8] t[8] t[9];
t[8] t[8] t[8] t[4] t[8] t[8] t[9];
t[8] t[8] t[8] t[8] t[5] t[8] t[9];
t[8] t[8] t[8] t[8] t[8] t[6] t[9];
t[9] t[9] t[9] t[9] t[9] t[9] t[7]]),
(n=7,
id="{1,2}",orbit_size=21,
tree=[t[1] t[8] t[9] t[9] t[9] t[9] t[9];
t[8] t[2] t[9] t[9] t[9] t[9] t[9];
t[9] t[9] t[3] t[9] t[9] t[9] t[9];
t[9] t[9] t[9] t[4] t[9] t[9] t[9];
t[9] t[9] t[9] t[9] t[5] t[9] t[9];
t[9] t[9] t[9] t[9] t[9] t[6] t[9];
t[9] t[9] t[9] t[9] t[9] t[9] t[7]]),
(n=7,
id="{1,2,3,4,5}",orbit_size=21,
tree=[t[1] t[8] t[8] t[8] t[8] t[9] t[9];
t[8] t[2] t[8] t[8] t[8] t[9] t[9];
t[8] t[8] t[3] t[8] t[8] t[9] t[9];
t[8] t[8] t[8] t[4] t[8] t[9] t[9];
t[8] t[8] t[8] t[8] t[5] t[9] t[9];
t[9] t[9] t[9] t[9] t[9] t[6] t[9];
t[9] t[9] t[9] t[9] t[9] t[9] t[7]]),
(n=7,
id="{1,2,3}",orbit_size=35,
tree=[t[1] t[8] t[8] t[9] t[9] t[9] t[9];
t[8] t[2] t[8] t[9] t[9] t[9] t[9];
t[8] t[8] t[3] t[9] t[9] t[9] t[9];
t[9] t[9] t[9] t[4] t[9] t[9] t[9];
t[9] t[9] t[9] t[9] t[5] t[9] t[9];
t[9] t[9] t[9] t[9] t[9] t[6] t[9];
t[9] t[9] t[9] t[9] t[9] t[9] t[7]]),
(n=7,
id="{1,2,3,4}",orbit_size=35,
tree=[t[1] t[8] t[8] t[8] t[9] t[9] t[9];
t[8] t[2] t[8] t[8] t[9] t[9] t[9];
t[8] t[8] t[3] t[8] t[9] t[9] t[9];
t[8] t[8] t[8] t[4] t[9] t[9] t[9];
t[9] t[9] t[9] t[9] t[5] t[9] t[9];
t[9] t[9] t[9] t[9] t[9] t[6] t[9];
t[9] t[9] t[9] t[9] t[9] t[9] t[7]]),
# 2,16,1918
(n=7,
id="{1,2},{3,4,5,6,7}",orbit_size=21,
tree=[t[1] t[8] t[10] t[10] t[10] t[10] t[10];
t[8] t[2] t[10] t[10] t[10] t[10] t[10];
t[10] t[10] t[3] t[9] t[9] t[9] t[9] ;
t[10] t[10] t[9] t[4] t[9] t[9] t[9] ;
t[10] t[10] t[9] t[9] t[5] t[9] t[9] ;
t[10] t[10] t[9] t[9] t[9] t[6] t[9] ;
t[10] t[10] t[9] t[9] t[9] t[9] t[7]]),
(n=7,
id="{1,2,3},{4,5,6,7}",orbit_size=35,
tree=[t[1] t[8] t[8] t[10] t[10] t[10] t[10];
t[8] t[2] t[8] t[10] t[10] t[10] t[10];
t[8] t[8] t[3] t[10] t[10] t[10] t[10];
t[10] t[10] t[10] t[4] t[9] t[9] t[9] ;
t[10] t[10] t[10] t[9] t[5] t[9] t[9] ;
t[10] t[10] t[10] t[9] t[9] t[6] t[9] ;
t[10] t[10] t[10] t[9] t[9] t[9] t[7]]),
(n=7,
id="{1,2,3,4,5},{1,2,3,4,5,6}",orbit_size=42,
tree=[t[1] t[8] t[8] t[8] t[8] t[9] t[10];
t[8] t[2] t[8] t[8] t[8] t[9] t[10];
t[8] t[8] t[3] t[8] t[8] t[9] t[10];
t[8] t[8] t[8] t[4] t[8] t[9] t[10];
t[8] t[8] t[8] t[8] t[5] t[9] t[10];
t[9] t[9] t[9] t[9] t[9] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2,3},{4,5,6}",orbit_size=70,
tree=[t[1] t[8] t[8] t[10] t[10] t[10] t[10];
t[8] t[2] t[8] t[10] t[10] t[10] t[10];
t[8] t[8] t[3] t[10] t[10] t[10] t[10];
t[10] t[10] t[10] t[4] t[9] t[9] t[10];
t[10] t[10] t[10] t[9] t[5] t[9] t[10];
t[10] t[10] t[10] t[9] t[9] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{3,4}",orbit_size=105,
tree=[t[1] t[8] t[10] t[10] t[10] t[10] t[10];
t[8] t[2] t[10] t[10] t[10] t[10] t[10];
t[10] t[10] t[3] t[9] t[10] t[10] t[10];
t[10] t[10] t[9] t[4] t[10] t[10] t[10];
t[10] t[10] t[10] t[10] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{1,2,3}",orbit_size=105,
tree=[t[1] t[8] t[9] t[10] t[10] t[10] t[10];
t[8] t[2] t[9] t[10] t[10] t[10] t[10];
t[9] t[9] t[3] t[10] t[10] t[10] t[10];
t[10] t[10] t[10] t[4] t[10] t[10] t[10];
t[10] t[10] t[10] t[10] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{3,4,5,6}",orbit_size=105,
tree=[t[1] t[8] t[10] t[10] t[10] t[10] t[10];
t[8] t[2] t[10] t[10] t[10] t[10] t[10];
t[10] t[10] t[3] t[9] t[9] t[9] t[10];
t[10] t[10] t[9] t[4] t[9] t[9] t[10];
t[10] t[10] t[9] t[9] t[5] t[9] t[10];
t[10] t[10] t[9] t[9] t[9] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{1,2,3,4,5,6}",orbit_size=105,
tree=[t[1] t[8] t[9] t[9] t[9] t[9] t[10];
t[8] t[2] t[9] t[9] t[9] t[9] t[10];
t[9] t[9] t[3] t[9] t[9] t[9] t[10];
t[9] t[9] t[9] t[4] t[9] t[9] t[10];
t[9] t[9] t[9] t[9] t[5] t[9] t[10];
t[9] t[9] t[9] t[9] t[9] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2,3,4},{1,2,3,4,5}",orbit_size=105,
tree=[t[1] t[8] t[8] t[8] t[9] t[10] t[10];
t[8] t[2] t[8] t[8] t[9] t[10] t[10];
t[8] t[8] t[3] t[8] t[9] t[10] t[10];
t[8] t[8] t[8] t[4] t[9] t[10] t[10];
t[9] t[9] t[9] t[9] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2,3,4},{1,2,3,4,5,6}",orbit_size=105,
tree=[t[1] t[8] t[8] t[8] t[9] t[9] t[10];
t[8] t[2] t[8] t[8] t[9] t[9] t[10];
t[8] t[8] t[3] t[8] t[9] t[9] t[10];
t[8] t[8] t[8] t[4] t[9] t[9] t[10];
t[9] t[9] t[9] t[9] t[5] t[9] t[10];
t[9] t[9] t[9] t[9] t[9] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2,3},{1,2,3,4}",orbit_size=140,
tree=[t[1] t[8] t[8] t[9] t[10] t[10] t[10];
t[8] t[2] t[8] t[9] t[10] t[10] t[10];
t[8] t[8] t[3] t[9] t[10] t[10] t[10];
t[9] t[9] t[9] t[4] t[10] t[10] t[10];
t[10] t[10] t[10] t[10] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2,3},{1,2,3,4,5,6}",orbit_size=140,
tree=[t[1] t[8] t[8] t[9] t[9] t[9] t[10];
t[8] t[2] t[8] t[9] t[9] t[9] t[10];
t[8] t[8] t[3] t[9] t[9] t[9] t[10];
t[9] t[9] t[9] t[4] t[9] t[9] t[10];
t[9] t[9] t[9] t[9] t[5] t[9] t[10];
t[9] t[9] t[9] t[9] t[9] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{3,4,5}",orbit_size=210,
tree=[t[1] t[8] t[10] t[10] t[10] t[10] t[10];
t[8] t[2] t[10] t[10] t[10] t[10] t[10];
t[10] t[10] t[3] t[9] t[9] t[10] t[10];
t[10] t[10] t[9] t[4] t[9] t[10] t[10];
t[10] t[10] t[9] t[9] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{1,2,3,4}",orbit_size=210,
tree=[t[1] t[8] t[9] t[9] t[10] t[10] t[10];
t[8] t[2] t[9] t[9] t[10] t[10] t[10];
t[9] t[9] t[3] t[9] t[10] t[10] t[10];
t[9] t[9] t[9] t[4] t[10] t[10] t[10];
t[10] t[10] t[10] t[10] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{1,2,3,4,5}",orbit_size=210,
tree=[t[1] t[8] t[9] t[9] t[9] t[10] t[10];
t[8] t[2] t[9] t[9] t[9] t[10] t[10];
t[9] t[9] t[3] t[9] t[9] t[10] t[10];
t[9] t[9] t[9] t[4] t[9] t[10] t[10];
t[9] t[9] t[9] t[9] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2,3},{1,2,3,4,5}",orbit_size=210,
tree=[t[1] t[8] t[8] t[9] t[9] t[10] t[10];
t[8] t[2] t[8] t[9] t[9] t[10] t[10];
t[8] t[8] t[3] t[9] t[9] t[10] t[10];
t[9] t[9] t[9] t[4] t[9] t[10] t[10];
t[9] t[9] t[9] t[9] t[5] t[10] t[10];
t[10] t[10] t[10] t[10] t[10] t[6] t[10];
t[10] t[10] t[10] t[10] t[10] t[10] t[7]]),
# 3,29,9450
(n=7,
id="{1,2,3},{4,5,6},{1,2,3,4,5,6}",orbit_size=70,
tree=[t[1] t[8] t[8] t[10] t[10] t[10] t[11];
t[8] t[2] t[8] t[10] t[10] t[10] t[11];
t[8] t[8] t[3] t[10] t[10] t[10] t[11];
t[10] t[10] t[10] t[4] t[9] t[9] t[11];
t[10] t[10] t[10] t[9] t[5] t[9] t[11];
t[10] t[10] t[10] t[9] t[9] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4},{5,6}",orbit_size=105,
tree=[t[1] t[8] t[11] t[11] t[11] t[11] t[11];
t[8] t[2] t[11] t[11] t[11] t[11] t[11];
t[11] t[11] t[3] t[9] t[11] t[11] t[11];
t[11] t[11] t[9] t[4] t[11] t[11] t[11];
t[11] t[11] t[11] t[11] t[5] t[10] t[11];
t[11] t[11] t[11] t[11] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4},{5,6,7}",orbit_size=105,
tree=[t[1] t[8] t[11] t[11] t[11] t[11] t[11];
t[8] t[2] t[11] t[11] t[11] t[11] t[11];
t[11] t[11] t[3] t[9] t[11] t[11] t[11];
t[11] t[11] t[9] t[4] t[11] t[11] t[11];
t[11] t[11] t[11] t[11] t[5] t[10] t[10];
t[11] t[11] t[11] t[11] t[10] t[6] t[10];
t[11] t[11] t[11] t[11] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{3,4},{1,2,3,4}",orbit_size=105,
tree=[t[1] t[8] t[10] t[10] t[11] t[11] t[11];
t[8] t[2] t[10] t[10] t[11] t[11] t[11];
t[10] t[10] t[3] t[9] t[11] t[11] t[11];
t[10] t[10] t[9] t[4] t[11] t[11] t[11];
t[11] t[11] t[11] t[11] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3},{4,5,6,7}",orbit_size=105,
tree=[t[1] t[8] t[9] t[11] t[11] t[11] t[11];
t[8] t[2] t[9] t[11] t[11] t[11] t[11];
t[9] t[9] t[3] t[11] t[11] t[11] t[11];
t[11] t[11] t[11] t[4] t[10] t[10] t[10];
t[11] t[11] t[11] t[10] t[5] t[10] t[10];
t[11] t[11] t[11] t[10] t[10] t[6] t[10];
t[11] t[11] t[11] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{3,4,5,6},{3,4,5,6,7}",orbit_size=105,
tree=[t[1] t[8] t[11] t[11] t[11] t[11] t[11];
t[8] t[2] t[11] t[11] t[11] t[11] t[11];
t[11] t[11] t[3] t[9] t[9] t[9] t[10];
t[11] t[11] t[9] t[4] t[9] t[9] t[10];
t[11] t[11] t[9] t[9] t[5] t[9] t[10];
t[11] t[11] t[9] t[9] t[9] t[6] t[10];
t[11] t[11] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{3,4,5,6},{1,2,3,4,5,6}",orbit_size=105,
tree=[t[1] t[8] t[10] t[10] t[10] t[10] t[11];
t[8] t[2] t[10] t[10] t[10] t[10] t[11];
t[10] t[10] t[3] t[9] t[9] t[9] t[11];
t[10] t[10] t[9] t[4] t[9] t[9] t[11];
t[10] t[10] t[9] t[9] t[5] t[9] t[11];
t[10] t[10] t[9] t[9] t[9] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2,3},{4,5,6},{1,2,3,7}",orbit_size=140,
tree=[t[1] t[8] t[8] t[11] t[11] t[11] t[10];
t[8] t[2] t[8] t[11] t[11] t[11] t[10];
t[8] t[8] t[3] t[11] t[11] t[11] t[10];
t[11] t[11] t[11] t[4] t[9] t[9] t[11];
t[11] t[11] t[11] t[9] t[5] t[9] t[11];
t[11] t[11] t[11] t[9] t[9] t[6] t[11];
t[10] t[10] t[10] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4},{1,2,5,6,7}",orbit_size=210,
tree=[t[1] t[8] t[11] t[11] t[10] t[10] t[10];
t[8] t[2] t[11] t[11] t[10] t[10] t[10];
t[11] t[11] t[3] t[9] t[11] t[11] t[11];
t[11] t[11] t[9] t[4] t[11] t[11] t[11];
t[10] t[10] t[11] t[11] t[5] t[10] t[10];
t[10] t[10] t[11] t[11] t[10] t[6] t[10];
t[10] t[10] t[11] t[11] t[10] t[10] t[7]]),
(n=7,
id="{1,2},{3,4,5},{1,2,6,7}",orbit_size=210,
tree=[t[1] t[8] t[11] t[11] t[11] t[10] t[10];
t[8] t[2] t[11] t[11] t[11] t[10] t[10];
t[11] t[11] t[3] t[9] t[9] t[11] t[11];
t[11] t[11] t[9] t[4] t[9] t[11] t[11];
t[11] t[11] t[9] t[9] t[5] t[11] t[11];
t[10] t[10] t[11] t[11] t[11] t[6] t[10];
t[10] t[10] t[11] t[11] t[11] t[10] t[7]]),
(n=7,
id="{1,2},{3,4,5},{1,2,3,4,5}",orbit_size=210,
tree=[t[1] t[8] t[10] t[10] t[10] t[11] t[11];
t[8] t[2] t[10] t[10] t[10] t[11] t[11];
t[10] t[10] t[3] t[9] t[9] t[11] t[11];
t[10] t[10] t[9] t[4] t[9] t[11] t[11];
t[10] t[10] t[9] t[9] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4,5},{3,4,5,6,7}",orbit_size=210,
tree=[t[1] t[8] t[11] t[11] t[11] t[11] t[11];
t[8] t[2] t[11] t[11] t[11] t[11] t[11];
t[11] t[11] t[3] t[9] t[9] t[10] t[10];
t[11] t[11] t[9] t[4] t[9] t[10] t[10];
t[11] t[11] t[9] t[9] t[5] t[10] t[10];
t[11] t[11] t[10] t[10] t[10] t[6] t[10];
t[11] t[11] t[10] t[10] t[10] t[10] t[7]]),
(n=7,
id="{1,2,3,4},{1,2,3,4,5},{1,2,3,4,5,6}",orbit_size=210,
tree=[t[1] t[8] t[8] t[8] t[9] t[10] t[11];
t[8] t[2] t[8] t[8] t[9] t[10] t[11];
t[8] t[8] t[3] t[8] t[9] t[10] t[11];
t[8] t[8] t[8] t[4] t[9] t[10] t[11];
t[9] t[9] t[9] t[9] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4},{1,2,3,4,5}",orbit_size=315,
tree=[t[1] t[8] t[10] t[10] t[10] t[11] t[11];
t[8] t[2] t[10] t[10] t[10] t[11] t[11];
t[10] t[10] t[3] t[9] t[10] t[11] t[11];
t[10] t[10] t[9] t[4] t[10] t[11] t[11];
t[10] t[10] t[10] t[10] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4},{1,2,3,4,5,6}",orbit_size=315,
tree=[t[1] t[8] t[10] t[10] t[10] t[10] t[11];
t[8] t[2] t[10] t[10] t[10] t[10] t[11];
t[10] t[10] t[3] t[9] t[10] t[10] t[11];
t[10] t[10] t[9] t[4] t[10] t[10] t[11];
t[10] t[10] t[10] t[10] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3},{4,5,6}",orbit_size=420,
tree=[t[1] t[8] t[9] t[11] t[11] t[11] t[11];
t[8] t[2] t[9] t[11] t[11] t[11] t[11];
t[9] t[9] t[3] t[11] t[11] t[11] t[11];
t[11] t[11] t[11] t[4] t[10] t[10] t[11];
t[11] t[11] t[11] t[10] t[5] t[10] t[11];
t[11] t[11] t[11] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3},{1,2,3,4}",orbit_size=420,
tree=[t[1] t[8] t[9] t[10] t[11] t[11] t[11];
t[8] t[2] t[9] t[10] t[11] t[11] t[11];
t[9] t[9] t[3] t[10] t[11] t[11] t[11];
t[10] t[10] t[10] t[4] t[11] t[11] t[11];
t[11] t[11] t[11] t[11] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3},{1,2,3,4,5,6}",orbit_size=420,
tree=[t[1] t[8] t[9] t[10] t[10] t[10] t[11];
t[8] t[2] t[9] t[10] t[10] t[10] t[11];
t[9] t[9] t[3] t[10] t[10] t[10] t[11];
t[10] t[10] t[10] t[4] t[10] t[10] t[11];
t[10] t[10] t[10] t[10] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4,5},{3,4,5,6}",orbit_size=420,
tree=[t[1] t[8] t[11] t[11] t[11] t[11] t[11];
t[8] t[2] t[11] t[11] t[11] t[11] t[11];
t[11] t[11] t[3] t[9] t[9] t[10] t[11];
t[11] t[11] t[9] t[4] t[9] t[10] t[11];
t[11] t[11] t[9] t[9] t[5] t[10] t[11];
t[11] t[11] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4,5},{1,2,3,4,5,6}",orbit_size=420,
tree=[t[1] t[8] t[10] t[10] t[10] t[10] t[11];
t[8] t[2] t[10] t[10] t[10] t[10] t[11];
t[10] t[10] t[3] t[9] t[9] t[10] t[11];
t[10] t[10] t[9] t[4] t[9] t[10] t[11];
t[10] t[10] t[9] t[9] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3,4,5},{1,2,3,4,5,6}",orbit_size=420,
tree=[t[1] t[8] t[9] t[9] t[9] t[10] t[11];
t[8] t[2] t[9] t[9] t[9] t[10] t[11];
t[9] t[9] t[3] t[9] t[9] t[10] t[11];
t[9] t[9] t[9] t[4] t[9] t[10] t[11];
t[9] t[9] t[9] t[9] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2,3},{1,2,3,4},{1,2,3,4,5}",orbit_size=420,
tree=[t[1] t[8] t[8] t[9] t[10] t[11] t[11];
t[8] t[2] t[8] t[9] t[10] t[11] t[11];
t[8] t[8] t[3] t[9] t[10] t[11] t[11];
t[9] t[9] t[9] t[4] t[10] t[11] t[11];
t[10] t[10] t[10] t[10] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2,3},{1,2,3,4},{1,2,3,4,5,6}",orbit_size=420,
tree=[t[1] t[8] t[8] t[9] t[10] t[10] t[11];
t[8] t[2] t[8] t[9] t[10] t[10] t[11];
t[8] t[8] t[3] t[9] t[10] t[10] t[11];
t[9] t[9] t[9] t[4] t[10] t[10] t[11];
t[10] t[10] t[10] t[10] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2,3},{1,2,3,4,5},{1,2,3,4,5,6}",orbit_size=420,
tree=[t[1] t[8] t[8] t[9] t[9] t[10] t[11];
t[8] t[2] t[8] t[9] t[9] t[10] t[11];
t[8] t[8] t[3] t[9] t[9] t[10] t[11];
t[9] t[9] t[9] t[4] t[9] t[10] t[11];
t[9] t[9] t[9] t[9] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4},{1,2,5}",orbit_size=630,
tree=[t[1] t[8] t[11] t[11] t[10] t[11] t[11];
t[8] t[2] t[11] t[11] t[10] t[11] t[11];
t[11] t[11] t[3] t[9] t[11] t[11] t[11];
t[11] t[11] t[9] t[4] t[11] t[11] t[11];
t[10] t[10] t[11] t[11] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{3,4},{1,2,5,6}",orbit_size=630,
tree=[t[1] t[8] t[11] t[11] t[10] t[10] t[11];
t[8] t[2] t[11] t[11] t[10] t[10] t[11];
t[11] t[11] t[3] t[9] t[11] t[11] t[11];
t[11] t[11] t[9] t[4] t[11] t[11] t[11];
t[10] t[10] t[11] t[11] t[5] t[10] t[11];
t[10] t[10] t[11] t[11] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3},{1,2,3,4,5}",orbit_size=630,
tree=[t[1] t[8] t[9] t[10] t[10] t[11] t[11];
t[8] t[2] t[9] t[10] t[10] t[11] t[11];
t[9] t[9] t[3] t[10] t[10] t[11] t[11];
t[10] t[10] t[10] t[4] t[10] t[11] t[11];
t[10] t[10] t[10] t[10] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3,4},{1,2,3,4,5}",orbit_size=630,
tree=[t[1] t[8] t[9] t[9] t[10] t[11] t[11];
t[8] t[2] t[9] t[9] t[10] t[11] t[11];
t[9] t[9] t[3] t[9] t[10] t[11] t[11];
t[9] t[9] t[9] t[4] t[10] t[11] t[11];
t[10] t[10] t[10] t[10] t[5] t[11] t[11];
t[11] t[11] t[11] t[11] t[11] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]]),
(n=7,
id="{1,2},{1,2,3,4},{1,2,3,4,5,6}",orbit_size=630,
tree=[t[1] t[8] t[9] t[9] t[10] t[10] t[11];
t[8] t[2] t[9] t[9] t[10] t[10] t[11];
t[9] t[9] t[3] t[9] t[10] t[10] t[11];
t[9] t[9] t[9] t[4] t[10] t[10] t[11];
t[10] t[10] t[10] t[10] t[5] t[10] t[11];
t[10] t[10] t[10] t[10] t[10] t[6] t[11];
t[11] t[11] t[11] t[11] t[11] t[11] t[7]])
]
end
| LinearCovarianceModels | https://github.com/saschatimme/LinearCovarianceModels.jl.git |
|
[
"MIT"
] | 0.2.2 | ecdb4458557d6a4ab428e1f3dbfc23438ee7a604 | code | 2493 | using LinearCovarianceModels
using LinearAlgebra, Test
using HomotopyContinuation
const HC = HomotopyContinuation
const LC = LinearCovarianceModels
@testset "LinearCovariance.jl" begin
@testset "vec to sym and back" begin
v = [1, 2, 3, 4, 5, 6]
@test vec_to_sym(v) == [1 2 3; 2 4 5; 3 5 6]
@test sym_to_vec(vec_to_sym(v)) == v
end
@testset "LCModels" begin
A = toeplitz(3)
@test A isa LCModel
x = variables(vec(A.Σ))
@test A.Σ == [x[1] x[2] x[3]
x[2] x[1] x[2]
x[3] x[2] x[1]]
@test dim(A) == 3
T = tree(4, "{1,2},{1,2,3}")
@test dim(T) == 7
Ts = trees(5)
@test all(isa.(last.(Ts), LCModel))
D = generic_diagonal(6, 3)
@test dim(D) == 3
@test_throws Union{ArgumentError,MethodError} generic_diagonal(6, 0)
@test_throws ArgumentError generic_diagonal(6, 7)
D = generic_subspace(6, 4)
@test dim(D) == 4
@test_throws ArgumentError generic_subspace(6, binomial(6+1,2)+1)
@test_throws Union{ArgumentError,MethodError} generic_subspace(6, 0)
# @test_throws ArgumentError LCModel(toeplitz(3).Σ .^2)
# throw for non-symmetric input
@test_throws ArgumentError LCModel([x[1] x[1]; x[2] x[1]])
# handle special matrix types
@var θ[1:7]
M = LCModel(SymTridiagonal(θ[1:4],θ[5:7]))
@test dim(M) == 7
end
@testset "mle system" begin
@var x y z
Σ = [x y; y z]
F = mle_system(LCModel(Σ))
@test F isa HC.System
end
@testset "ml_degree_witness" begin
Σ = toeplitz(3)
W = ml_degree_witness(Σ)
@test model(W) isa LCModel
@test is_dual(W) == false
@test ml_degree(W) == 3
@test length(solutions(W)) == 3
@test dim(model(W)) == 3
@test parameters(W) isa AbstractVector
@test verify(W)
@test verify(W; trace_tol=1e-6)
end
@testset "solve" begin
S = [4/5 -9/5 -1/25
-9/5 79/16 25/24
-1/25 25/24 17/16]
Σ = toeplitz(3)
W = ml_degree_witness(Σ)
crits = critical_points(W, S)
@test length(crits) == 3
sort!(crits; by=s -> s[2], rev=true)
p1,p2,p3 = last.(crits)
@test p1 == :global_maximum
@test p2 == :local_maximum
@test p3 == :saddle_point
@test crits[1][1] ≈ mle(W, S) atol=1e-8
end
end
| LinearCovarianceModels | https://github.com/saschatimme/LinearCovarianceModels.jl.git |
|
[
"MIT"
] | 0.2.2 | ecdb4458557d6a4ab428e1f3dbfc23438ee7a604 | docs | 2654 | # LinearCovarianceModels
[![][docs-stable-img]][docs-stable-url] [](https://travis-ci.com/saschatimme/LinearCovarianceModels.jl)
[`LinearCovarianceModels.jl`](https://github.com/saschatimme/LinearCovarianceModels) is a package for
computing Maximum Likelihood degrees and MLEs of linear covariance models using numerical nonlinear algebra.
In particular [HomotopyContinuation.jl](https://www.JuliaHomotopyContinuation.org).
## Installation
In order to use `LinearCovarianceModels.jl` you need to have at least Julia 1.1 installed. If this is not the case, you can download it from [julialang.org](https://julialang.org). Please see the [platform-specific instructions](https://julialang.org/downloads/platform.html) if you have trouble installing Julia.
The package can be installed by executing
```julia
julia> using Pkg; Pkg.add("LinearCovarianceModels")
```
in the Julia REPL.
If you are looking for a more IDE-like experience, take a look at [Juno](https://junolab.org).
## Introduction by Example
```julia
# load package
julia> using LinearCovarianceModels
# Create a linear covariance model
julia> Σ = toeplitz(3)
3-dimensional LCModel:
θ₁ θ₂ θ₃
θ₂ θ₁ θ₂
θ₃ θ₂ θ₁
# Compute a witness for the ML degree
julia> W = ml_degree_witness(Σ)
MLDegreeWitness:
• ML degree → 3
• model dimension → 3
• dual → false
# We offer the option to numerically verify the ML Degree
julia> verify(W)
Compute additional witnesses for completeness...
Found 10 additional witnesses
Found 10 additional witnesses
Compute trace...
Norm of trace: 2.6521474798326718e-12
true
# Consider the sample covariance matrix S
julia> S = [4/5 -9/5 -1/25
-9/5 79/16 25/24
-1/25 25/24 17/16];
# We use the ML degree witness set W to compute all critical points of the MLE
# problem.
julia> critical_points(W, S)
3-element Array{Tuple{Array{Float64,1},Float64,Symbol},1}:
([2.39038, -0.286009, 0.949965], -5.421751313919751, :local_maximum)
([2.52783, -0.215929, -1.45229], -5.346601549034418, :global_maximum)
([2.28596, -0.256394, 0.422321], -5.424161999175718, :saddle_point)
# If we are just interested in the MLE, there is also a shorthand.
julia> mle(W, S)
3-element Array{Float64,1}:
2.527832268219689
-0.21592947057775033
-1.4522862659134732
```
For more information, take a look at the [documentation](https://saschatimme.github.io/LinearCovarianceModels.jl/stable).
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://saschatimme.github.io/LinearCovarianceModels.jl/stable
| LinearCovarianceModels | https://github.com/saschatimme/LinearCovarianceModels.jl.git |
|
[
"MIT"
] | 0.2.2 | ecdb4458557d6a4ab428e1f3dbfc23438ee7a604 | docs | 2189 | # Introduction
[`LinearCovarianceModels.jl`](https://github.com/saschatimme/LinearCovarianceModels.jl) is a package for
computing Maximum Likelihood degrees of linear covariance models using numerical nonlinear algebra,
in particular [HomotopyContinuation.jl](https://www.JuliaHomotopyContinuation.org).
## Introduction by Example
```julia
# Create a linear covariance model
julia> Σ = toeplitz(3)
3-dimensional LCModel:
θ₁ θ₂ θ₃
θ₂ θ₁ θ₂
θ₃ θ₂ θ₁
# Compute a witness for the ML degree
julia> W = ml_degree_witness(Σ)
MLDegreeWitness:
• ML degree → 3
• model dimension → 3
• dual → false
# We offer the option to numerically verify the ML Degree
julia> verify(W)
Compute additional witnesses for completeness...
Found 10 additional witnesses
Found 10 additional witnesses
Compute trace...
Norm of trace: 2.6521474798326718e-12
true
# Consider the sample covariance matrix S
julia> S = [4/5 -9/5 -1/25
-9/5 79/16 25/24
-1/25 25/24 17/16];
# We use the ML degree witness set W to compute all critical points of the MLE
# problem.
julia> critical_points(W, S)
3-element Array{Tuple{Array{Float64,1},Float64,Symbol},1}:
([2.39038, -0.286009, 0.949965], -5.421751313919751, :local_maximum)
([2.52783, -0.215929, -1.45229], -5.346601549034418, :global_maximum)
([2.28596, -0.256394, 0.422321], -5.424161999175718, :saddle_point)
# If we are just interested in the MLE, there is also a shorthand.
julia> mle(W, S)
3-element Array{Float64,1}:
2.527832268219689
-0.21592947057775033
-1.4522862659134732
```
## Linear Covariance Models
The linear covariance models are wrapped in the `LCModel` type:
```@docs
LCModel
model
dim(::LCModel)
```
### Default models
The following linear covariance models are provided by default
```@docs
generic_subspace
generic_diagonal
toeplitz
tree
trees
```
## ML Degree
```@docs
ml_degree_witness
MLDegreeWitness
ml_degree
parameters(W::MLDegreeWitness)
solutions(W::MLDegreeWitness)
is_dual
verify
```
## Compute MLE for specific instances
```@docs
mle
critical_points
covariance_matrix
logl
gradient_logl
hessian_logl
classify_point
```
## Helper functions
```@docs
sym_to_vec
vec_to_sym
```
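
As a quick example of how these helpers map between a symmetric matrix and its half-vectorization (this mirrors the package's test suite, which asserts exactly these identities):

```julia
using LinearCovarianceModels

v = [1, 2, 3, 4, 5, 6]
vec_to_sym(v)              # == [1 2 3; 2 4 5; 3 5 6]
sym_to_vec(vec_to_sym(v))  # == v, i.e. the two functions are inverses
```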
| LinearCovarianceModels | https://github.com/saschatimme/LinearCovarianceModels.jl.git |
|
[
"MIT"
] | 0.0.1 | 7f5b02258a3ca0221a6a9710b0a0a2e8fb4957fe | code | 582 | using Documenter
using LatticeRules
makedocs(;
modules=[LatticeRules],
authors="Pieterjan Robbe <[email protected]> and contributors",
repo="https://github.com/PieterjanRobbe/LatticeRules.jl/blob/{commit}{path}#L{line}",
sitename="LatticeRules.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://PieterjanRobbe.github.io/LatticeRules.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/PieterjanRobbe/LatticeRules.jl",
)
| LatticeRules | https://github.com/PieterjanRobbe/LatticeRules.jl.git |
|
[
"MIT"
] | 0.0.1 | 7f5b02258a3ca0221a6a9710b0a0a2e8fb4957fe | code | 4890 | #
# LatticeRule32
#
struct LatticeRule32{s} <: AbstractLatticeRule{s}
z::Vector{UInt32} # generating vector
n::Int64 # max number of points in the lattice rule
end
# default lattice rule type
const LatticeRule = LatticeRule32
# access max number of points in the lattice
Base.length(lattice_rule::LatticeRule32) = lattice_rule.n
# uinttype
uinttype(::LatticeRule32) = UInt32
"""
LatticeRule32(z, s, n)
LatticeRule32(z, s)
LatticeRule32(z)
Returns a rank-1 lattice rule in `s` dimensions with generating vector `z` and at most `n` points.
When no maximum number of points `n` is provided, we assume `n = 2^32` (the value `typemax(UInt32) + 1` used by the constructor). When no number of dimensions `s` is provided, we assume `s = length(z)`.
!!! info
    Technically, we return an extensible lattice sequence where the `k`-th point is transformed using the gray-coded radical inverse function. This has the advantage that we can add points to the lattice without changing the already computed points.
More generating vectors can be found online [here](https://web.maths.unsw.edu.au/~fkuo/lattice/index.html) or [here](https://people.cs.kuleuven.be/~dirk.nuyens/qmc-generators/).
# Examples
```jldoctest; setup = :(using LatticeRules; import Random; Random.seed!(1))
julia> lattice_rule = LatticeRule32([UInt32(1), UInt32(5)], 2, 8) # Fibonacci lattice
LatticeRule32{2}
julia> getpoint(lattice_rule, 2)
2-element Array{Float64,1}:
0.25
0.25
```
See also: [`getpoint`](@ref), [`ShiftedLatticeRule32`](@ref)
"""
LatticeRule32(z::Vector{UInt32}) = LatticeRule32(z, length(z)) # specify generating vector
# specify generating vector and number of dimensions
LatticeRule32(z::Vector{UInt32}, s::Integer) = LatticeRule32(z, s, typemax(UInt32) + 1)
# specify generating vector, number of dimensions and maximum number of points
function LatticeRule32(z::Vector{UInt32}, s::Integer, n::Integer)
s > 0 || throw(ArgumentError("number of dimensions s must be larger than 0"))
s ≤ length(z) || throw(ArgumentError("number of dimensions s must be less than or equal to the length of the generating vector z"))
n > 0 || throw(ArgumentError("maximum number of points n must be larger than 0"))
n ≤ typemax(UInt32) + 1 || throw(ArgumentError("maximum number of points n must be less than or equal to 2^32, consider implementing a LatticeRule64 type"))
LatticeRule32{s}(view(z, 1:s), n)
end
"""
LatticeRule32(file, s, n)
LatticeRule32(file, s)
LatticeRule32(file)
Returns a rank-1 lattice rule in `s` dimensions with generating vector `z` read from the file `file` and with at most `n` points.
# Examples
```jldoctest; setup = :(using LatticeRules)
julia> z_file = K_3600_32_file;
julia> lattice_rule = LatticeRule32(z_file, 16)
LatticeRule32{16}
julia> getpoint(lattice_rule, 123)
16-element Array{Float64,1}:
0.8671875
0.9609375
0.6015625
0.8984375
0.6484375
0.6328125
0.3203125
0.2890625
0.0234375
0.1015625
0.7890625
0.0703125
0.6953125
0.0234375
0.1171875
0.0859375
```
See also: [`getpoint`](@ref), [`ShiftedLatticeRule32`](@ref)
"""
LatticeRule32(file::AbstractString) = LatticeRule32(read32(file)) # specify file containing generating vector
# specify file containing generating vector and number of dimensions
LatticeRule32(file::AbstractString, s::Integer) = LatticeRule32(read32(file), s)
# specify file containing generating vector, number of dimensions and maximum number of points
LatticeRule32(file::AbstractString, s::Integer, n::Integer) = LatticeRule32(read32(file), s, n)
"""
LatticeRule32(s)
Returns a rank-1 lattice rule in `s` dimensions that uses a default generating vector with order-2 weights.
# Examples
```jldoctest; setup = :(using LatticeRules)
julia> lattice_rule = LatticeRule32(16)
LatticeRule32{16}
julia> getpoint(lattice_rule, 123)
16-element Array{Float64,1}:
0.8671875
0.5390625
0.6015625
0.3671875
0.6796875
0.8203125
0.3046875
0.8515625
0.7109375
0.6328125
0.5703125
0.2578125
0.6953125
0.0390625
0.2421875
0.4453125
```
See also: [`getpoint`](@ref), [`ShiftedLatticeRule32`](@ref)
"""
function LatticeRule32(s::Integer) # specify number of dimensions only
s ≤ 3600 || throw(ArgumentError("number of dimensions s must be less than or equal to 3600, please supply your own generating vector z"))
s ≤ 250 ? LatticeRule32(CKN_250_20, s, 2^20) : LatticeRule32(K_3600_32, s)
end
# in-place version of unsafe_getpoint (with 0 memory allocations)
@inline function unsafe_getpoint!(x::Vector{<:AbstractFloat}, lattice_rule::LatticeRule32, k::UInt32)
ϕ_k = reversebits(k) * 2.0^(-32) # gray coded radical inverse function in base 2
@inbounds for i in 1:length(x)
x[i] = ϕ_k * lattice_rule.z[i]
x[i] -= floor(x[i]) # mod 1
end
x
end
# fancy printing
Base.show(io::IO, lattice_rule::LatticeRule32{s}) where s = print(io, string("LatticeRule32{", s, "}"))
| LatticeRules | https://github.com/PieterjanRobbe/LatticeRules.jl.git |
module LatticeRules
import Random
export AbstractLatticeRule, LatticeRule, LatticeRule32, ShiftedLatticeRule, ShiftedLatticeRule32, getpoint, unsafe_getpoint, unsafe_getpoint!, ndims, length
export K_3600_32, K_3600_32_file, CKN_250_20, CKN_250_20_file # from lattice_data.jl
for file in ["common", "lattice_data", "LatticeRule32", "ShiftedLatticeRule32"]
include(string(file, ".jl"))
end
end # module
#
# ShiftedLatticeRule32
#
struct ShiftedLatticeRule32{s, L, V} <: AbstractLatticeRule{s}
lattice_rule::L
Δ::V
end
# default shifted lattice rule type
const ShiftedLatticeRule = ShiftedLatticeRule32
# access max number of points in the lattice
Base.length(shifted_lattice_rule::ShiftedLatticeRule32) = length(shifted_lattice_rule.lattice_rule)
# uinttype
uinttype(::ShiftedLatticeRule32) = UInt32
"""
ShiftedLatticeRule32(lattice_rule)
ShiftedLatticeRule32(lattice_rule, shift)
Returns a shifted rank-1 lattice rule based on the lattice rule `lattice_rule` using the random shift `shift`. If no random shift is provided, we use `shift = rand(ndims(lattice_rule))`.
# Examples
```jldoctest; setup = :(using LatticeRules; import Random; Random.seed!(1))
julia> lattice_rule = LatticeRule32(16)
LatticeRule32{16}
julia> shifted_lattice_rule = ShiftedLatticeRule32(lattice_rule)
ShiftedLatticeRule32{16}
julia> getpoint(shifted_lattice_rule, 0)
16-element Array{Float64,1}:
0.23603334566204692
0.34651701419196046
0.3127069683360675
0.00790928339056074
0.4886128300795012
0.21096820215853596
0.951916339835734
0.9999046588986136
0.25166218303197185
0.9866663668987996
0.5557510873245723
0.43710797460962514
0.42471785049513144
0.773223048457377
0.2811902322857298
0.20947237319807077
```
See also: [`LatticeRule32`](@ref), [`getpoint`](@ref)
"""
ShiftedLatticeRule32(lattice_rule::LatticeRule32{s}) where s = ShiftedLatticeRule32(lattice_rule, rand(s)) # specify lattice rule
# specify lattice rule and random shift
function ShiftedLatticeRule32(lattice_rule::LatticeRule32{s}, Δ::Vector{<:AbstractFloat}) where s
length(Δ) == ndims(lattice_rule) || throw(DimensionMismatch("length of the random shift vector must be equal to the number of dimensions of the lattice rule, expected $(ndims(lattice_rule)), got $(length(Δ))"))
all(0 .≤ Δ .≤ 1) || throw(ArgumentError("random shift vector must contain uniformly distributed random numbers"))
ShiftedLatticeRule32{s, typeof(lattice_rule), typeof(Δ)}(lattice_rule, Δ)
end
"""
ShiftedLatticeRule32(s)
Returns a shifted rank-1 lattice rule in `s` dimensions that uses a default generating vector with order-2 weights and a randomly generated shift vector.
# Examples
```jldoctest; setup = :(using LatticeRules; import Random; Random.seed!(1))
julia> shifted_lattice_rule = ShiftedLatticeRule32(16)
ShiftedLatticeRule32{16}
julia> shifted_lattice_rule[0]
16-element Array{Float64,1}:
0.23603334566204692
0.34651701419196046
0.3127069683360675
0.00790928339056074
0.4886128300795012
0.21096820215853596
0.951916339835734
0.9999046588986136
0.25166218303197185
0.9866663668987996
0.5557510873245723
0.43710797460962514
0.42471785049513144
0.773223048457377
0.2811902322857298
0.20947237319807077
```
See also: [`getpoint`](@ref), [`LatticeRule32`](@ref)
"""
ShiftedLatticeRule32(s::Integer) = ShiftedLatticeRule32(LatticeRule32(s)) # specify number of dimensions only
# in-place version of unsafe_getpoint (with 0 memory allocations)
@inline function unsafe_getpoint!(x::Vector{<:AbstractFloat}, shifted_lattice_rule::ShiftedLatticeRule32, k::UInt32)
ϕ_k = reversebits(k) * 2.0^(-32) # gray coded radical inverse function in base 2
@inbounds for i in 1:length(x)
x[i] = ϕ_k * shifted_lattice_rule.lattice_rule.z[i] + shifted_lattice_rule.Δ[i]
x[i] -= floor(x[i]) # mod 1
end
x
end
# fancy printing
Base.show(io::IO, shifted_lattice_rule::ShiftedLatticeRule32{s}) where s = print(io, string("ShiftedLatticeRule32{", s, "}"))
#
# AbstractLatticeRule
#
abstract type AbstractLatticeRule{s} <: Random.AbstractRNG end
# number of dimensions
Base.ndims(::AbstractLatticeRule{s}) where s = s::Int
# size
Base.size(lattice_rule::AbstractLatticeRule) = (length(lattice_rule), )
# reverse bits (https://graphics.stanford.edu/~seander/bithacks.html#ReverseParallel)
reversebits(u::UInt32) = begin
u = ((u >> 1) & 0x55555555) | ((u & 0x55555555) << 1)
u = ((u >> 2) & 0x33333333) | ((u & 0x33333333) << 2)
u = ((u >> 4) & 0x0F0F0F0F) | ((u & 0x0F0F0F0F) << 4)
u = ((u >> 8) & 0x00FF00FF) | ((u & 0x00FF00FF) << 8)
u = ( u >> 16 ) | ( u << 16)
end
# read contents of file containing a generating vector and convert it to a Vector of UInt32's
read32(file::AbstractString) = parse.(UInt32, readlines(file))
"""
getpoint(lattice_rule, k)
Get the `k`-th point of the lattice rule `lattice_rule`.
!!! note
An alternative syntax is `getindex(lattice_rule, k)` or `lattice_rule[k]`, this allows you to write the one-liner `Q = mean(f.(lattice_rule[0:N-1]))` for the quasi-Monte Carlo estimator for ``E[f]``.
```jldoctest; setup = :(using LatticeRules)
julia> lattice_rule = LatticeRule32(2)
LatticeRule32{2}
julia> getpoint(lattice_rule, 3)
2-element Array{Float64,1}:
0.75
0.25
```
See also: [`LatticeRule32`](@ref), [`ShiftedLatticeRule32`](@ref)
"""
@inline function getpoint(lattice_rule::AbstractLatticeRule, k::Number) # get the k-th point of the lattice sequence
0 ≤ k < length(lattice_rule) || throw(BoundsError(lattice_rule, k))
unsafe_getpoint(lattice_rule, convert(uinttype(lattice_rule), k))
end
# get the k-th point of the sequence without bounds checking for 32 bit integers
@inline unsafe_getpoint(lattice_rule::AbstractLatticeRule{s}, k::UInt32) where s = begin
x = Vector{Float64}(undef, s)
unsafe_getpoint!(x, lattice_rule, k) # dispatch to AbstractLatticeRule subtype
end
# make LatticeRule iterable
Base.iterate(lattice_rule::AbstractLatticeRule, state=uinttype(lattice_rule)(0)) = state ≥ length(lattice_rule) ? nothing : (getpoint(lattice_rule, state), state + uinttype(lattice_rule)(1))
Base.eltype(::Type{<:AbstractLatticeRule}) = Vector{Float64}
# enable lattice_rule[i] access
Base.getindex(lattice_rule::AbstractLatticeRule, i::Number) = getpoint(lattice_rule, i)
Base.getindex(lattice_rule::AbstractLatticeRule, I) = [lattice_rule[i] for i in I]
Base.firstindex(lattice_rule::AbstractLatticeRule) = uinttype(lattice_rule)(0)
Base.lastindex(lattice_rule::AbstractLatticeRule) = uinttype(lattice_rule)(length(lattice_rule) - 1)
# default generating vector files
const K_3600_32_file = joinpath(@__DIR__(), "..", "generating_vectors", "K_3600_32.txt")
const CKN_250_20_file = joinpath(@__DIR__(), "..", "generating_vectors", "CKN_250_20.txt")
# re-compile if these files change
include_dependency(K_3600_32_file)
include_dependency(CKN_250_20_file)
# default generating vectors
const K_3600_32 = read32(K_3600_32_file)
const CKN_250_20 = read32(CKN_250_20_file)
using LatticeRules, SpecialFunctions, Statistics, Test
@testset "LatticeRules" begin
@testset "Constructing LatticeRule" begin
# test constructor with generating vector, number of dimensions and max number of points
@testset "LatticeRule(z, s, n)" begin
lattice_rule = LatticeRule([UInt32(1), UInt32(5)], 2, 8)
@test ndims(lattice_rule) == 2
@test length(lattice_rule) == 8
@test size(lattice_rule) == (8,)
end
# test constructor with generating vector and number of dimensions
@testset "LatticeRule(z, s)" begin
lattice_rule = LatticeRule(K_3600_32, 16)
@test ndims(lattice_rule) == 16
@test length(lattice_rule) == 2^32
end
# test constructor with generating vector only
@testset "LatticeRule(z)" begin
lattice_rule = LatticeRule(K_3600_32, 250)
@test ndims(lattice_rule) == 250
@test length(lattice_rule) == 2^32
end
# test constructor with number of dimensions only
@testset "LatticeRule(s)" begin
lattice_rule = LatticeRule(251)
@test ndims(lattice_rule) == 251
@test length(lattice_rule) == 2^32
end
# test constructor with file containing generating vector, number of dimensions and max number of points
@testset "LatticeRule(z_file, s, n)" begin
lattice_rule = LatticeRule(CKN_250_20_file, 9, 2^20)
@test ndims(lattice_rule) == 9
@test length(lattice_rule) == 2^20
end
# test constructor with file containing generating vector and number of dimensions
@testset "LatticeRule(z_file, s)" begin
lattice_rule = LatticeRule(K_3600_32_file, 16)
@test ndims(lattice_rule) == 16
@test length(lattice_rule) == 2^32
end
# test constructor with file containing generating vector only
@testset "LatticeRule(z_file)" begin
lattice_rule = LatticeRule(K_3600_32_file)
@test ndims(lattice_rule) == 3600
@test length(lattice_rule) == 2^32
end
# test getpoint
@testset "getpoint(lattice_rule, k)" begin
lattice_rule = LatticeRule(10)
point0 = getpoint(lattice_rule, 0)
@inferred getpoint(lattice_rule, 10)
@test point0 isa eltype(LatticeRule)
@test sum(point0) == 0
end
# test iterator access
@testset "iterate(lattice_rule, state)" begin
lattice_rule = LatticeRule(32)
for i in lattice_rule
nothing
end
@test all(first(lattice_rule) .== lattice_rule[0])
end
# test lattice_rule[i] access
@testset "lattice_rule[i]" begin
lattice_rule = LatticeRule(2)
@test length(lattice_rule[0]) == 2
@inferred lattice_rule[100]
@test length(lattice_rule[end]) == 2
@test firstindex(lattice_rule) == 0
@test length(lattice_rule[1:20]) == 20
@test length(collect(lattice_rule)) == 2^20
end
# test error handling
@testset "error handling" begin
lattice_rule = LatticeRule(9)
@test_throws BoundsError getpoint(lattice_rule, -1)
@test_throws BoundsError getpoint(lattice_rule, 2^20)
@test_throws ArgumentError LatticeRule(0)
@test_throws ArgumentError LatticeRule(-1)
@test_throws ArgumentError LatticeRule(3601)
@test_throws ArgumentError LatticeRule(CKN_250_20, 251)
@test_throws ArgumentError LatticeRule(CKN_250_20_file, 12, 0)
@test_throws ArgumentError LatticeRule(K_3600_32, 251, 2^32 + 1)
end
# test print method
@testset "show(lattice_rule)" begin
lattice_rule = LatticeRule(9)
str = string(lattice_rule)
end
end
@testset "Constructing ShiftedLatticeRule" begin
# test constructor with lattice rule and random shift
@testset "ShiftedLatticeRule(lattice_rule, Δ)" begin
lattice_rule = LatticeRule([UInt32(1), UInt32(5)], 2, 8)
shifted_lattice_rule = ShiftedLatticeRule(lattice_rule, rand(2))
@test ndims(shifted_lattice_rule) == 2
@test length(shifted_lattice_rule) == 8
end
# test constructor with lattice rule only
@testset "ShiftedLatticeRule(lattice_rule)" begin
lattice_rule = LatticeRule([UInt32(1), UInt32(5)], 2, 8)
shifted_lattice_rule = ShiftedLatticeRule(lattice_rule)
@test ndims(shifted_lattice_rule) == 2
@test length(shifted_lattice_rule) == 8
end
# test constructor with number of dimensions only
@testset "ShiftedLatticeRule(s)" begin
shifted_lattice_rule = ShiftedLatticeRule(251)
@test ndims(shifted_lattice_rule) == 251
@test length(shifted_lattice_rule) == 2^32
end
# test getpoint
@testset "getpoint(shifted_lattice_rule, k)" begin
shifted_lattice_rule = ShiftedLatticeRule(10)
point0 = getpoint(shifted_lattice_rule, 0)
@test point0 isa eltype(ShiftedLatticeRule)
@inferred getpoint(shifted_lattice_rule, 101)
@test all(0 .≤ point0 .≤ 1)
end
# test error handling
@testset "error handling" begin
lattice_rule = LatticeRule(100)
@test_throws DimensionMismatch ShiftedLatticeRule(lattice_rule, rand(101))
@test_throws ArgumentError ShiftedLatticeRule(lattice_rule, rand(100) .- 1)
@test_throws ArgumentError ShiftedLatticeRule(lattice_rule, rand(100) .+ 1)
end
# test print method
@testset "show(shifted_lattice_rule)" begin
shifted_lattice_rule = ShiftedLatticeRule(9)
str = string(shifted_lattice_rule)
end
end
# approximate pi by throwing random darts in a square
@testset "Approximating pi by throwing darts" begin
darts(x) = x[1]*x[1] + x[2]*x[2] < 1
lattice_rule = LatticeRule(2)
Q = 4 * mean(darts.(collect(lattice_rule)))
@test Q ≈ π rtol=1e-5
end
# see Keister, Bradley D. "Multidimensional quadrature algorithms." Computers in Physics 10.2 (1996): 119-128.
@testset "Computing multidimensional integral from Keister, Bradley" begin
dims = [9, 25, 60, 80, 100]
exact = [-71.633234291 -1.356914e6 4.89052986e14 6.78878724e19 4.57024396e24]
f(x) = cos(sqrt(sum(erfinv.(2*x .- 1).^2)))
for (d, I) in Iterators.zip(dims, exact)
lattice_rule = ShiftedLatticeRule(d)
Q = π^(d/2) * mean(f.(collect(lattice_rule)))
@test Q ≈ I rtol=1e-3
end
end
end
# LatticeRules
| **Documentation** | **Build Status** | **Coverage** |
|-------------------|------------------|--------------|
| [](https://PieterjanRobbe.github.io/LatticeRules.jl/stable) [](https://PieterjanRobbe.github.io/LatticeRules.jl/dev) | [](https://github.com/PieterjanRobbe/LatticeRules.jl/actions) [](https://travis-ci.com/PieterjanRobbe/LatticeRules.jl) [](https://ci.appveyor.com/project/PieterjanRobbe/LatticeRules-jl) | [](https://codecov.io/gh/PieterjanRobbe/LatticeRules.jl) [](https://coveralls.io/github/PieterjanRobbe/LatticeRules.jl?branch=master) |
This module provides an implementation of rank-1 lattice rules. Lattice rules generate "quasi-random" sequences of points in `d` dimensions which are equally distributed over the `d`-dimensional unit cube [0,1]<sup>d</sup>.
## Usage
To initialize a rank-1 lattice rule `lattice_rule` in `d` dimensions, use
```julia
using LatticeRules
my_lattice = LatticeRule(d)
```
Then
```julia
getpoint(my_lattice, 0)
```
or
```julia
my_lattice[0]
```
returns the first point of the lattice, and
```julia
my_lattice[k]
```
returns the `k`th point of the lattice.
For a `d`-dimensional function `f`,
```julia
f.(my_lattice[1:N])
```
gives an approximation for the integral of `f` using `N` lattice points.
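For instance (a sketch — the integrand, the dimension, and `N` below are arbitrary choices, not part of the package):

```julia
using LatticeRules, Statistics

f(x) = prod(1 .+ (x .- 0.5) ./ 2)  # smooth test integrand; its exact integral over [0,1]^4 is 1
my_lattice = LatticeRule(4)        # rank-1 lattice rule in 4 dimensions
N = 2^10
Q = mean(f.(my_lattice[0:N-1]))    # quasi-Monte Carlo estimate of the integral
```

Because lattice points are numbered from 0, `my_lattice[0:N-1]` selects the first `N` points.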
Providing your own generating vector `z` is possible with
```julia
my_other_lattice = LatticeRule(z, d, n)
```
where `n` is the maximum number of points in the lattice.
In practice, it is more useful to have a shifted rank-1 lattice rule
```julia
my_shifted_lattice = ShiftedLatticeRule(d)
```
to obtain an error estimate in the same way as in the Monte Carlo method.
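For example, repeating the estimate over a few independent random shifts yields both a combined estimate and a standard error (a sketch — the integrand, `m`, and `N` are arbitrary choices):

```julia
using LatticeRules, Statistics

f(x) = prod(1 .+ (x .- 0.5) ./ 2)  # smooth test integrand with exact integral 1
m, N = 10, 2^10
Qs = [mean(f.(ShiftedLatticeRule(4)[0:N-1])) for _ in 1:m]  # one estimate per random shift
Q = mean(Qs)             # combined estimate of the integral
err = std(Qs) / sqrt(m)  # standard error, as in plain Monte Carlo
```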
An existing lattice rule can be turned into a randomly shifted lattice rule using
```julia
my_other_shifted_lattice = ShiftedLatticeRule(my_lattice)
```
or
```julia
shift = rand(ndims(my_lattice))
my_final_shifted_lattice = ShiftedLatticeRule(my_lattice, shift)
```
optionally providing the random shift vector `shift`.
More extensive documentation can be found [here](https://PieterjanRobbe.github.io/LatticeRules.jl/dev).
## Example
A classic toy example to illustrate the Monte Carlo method is to approximate the value of π by throwing random darts on a square board. Suppose we draw a circle on the board with a diameter equal to the length of one side of the square. Then, the ratio of the area of the circle to the area of the square is π/4. If we now repeatedly throw darts at random on the board, the ratio of the number of darts that landed inside the circle and the total number of darts, multiplied by 4, is an approximation to π.
First, generate a lattice rule in two dimensions.
```julia
using LatticeRules, Statistics
my_lattice = LatticeRule(2)
```
The function `inside_circle` checks if a dart is inside the circle:
```julia
inside_circle(x) = x[1]*x[1] + x[2]*x[2] < 1
```
Our approximation for the value of π is
```julia
Q = 4 * mean(inside_circle.(collect(my_lattice)))
```
with `Q = 3.1416015625`.
## See also
- [The "Magic Point Shop" of QMC point generators and generating vectors](https://people.cs.kuleuven.be/~dirk.nuyens/qmc-generators/) by D. Nuyens
- [Lattice rule generating vectors](https://web.maths.unsw.edu.au/~fkuo/lattice/index.html) by F. Y. Kuo.
```@meta
CurrentModule = LatticeRules
```
# LatticeRules
```@index
```
```@autodocs
Modules = [LatticeRules]
```
import Pkg
Pkg.instantiate()
using JuliaFormatter
format(dirname(@__DIR__))
| NamingConventions | https://github.com/raphasampaio/NamingConventions.jl.git |
import Pkg
Pkg.instantiate()
using Revise
Pkg.activate(dirname(@__DIR__))
Pkg.instantiate()
using NamingConventions
@info("""
This session is using NamingConventions.jl with Revise.jl.
For more information visit https://timholy.github.io/Revise.jl/stable/.
""")
module NamingConventions
include("abstract.jl")
include("decoder.jl")
include("encoder.jl")
include("convert.jl")
export
decode,
DecodingError,
encode,
AbstractNamingConvention,
CamelCase,
FlatCase,
KebabCase,
PascalCase,
ScreamingSnakeCase,
SnakeCase
end
abstract type AbstractNamingConvention end
struct CamelCase <: AbstractNamingConvention end
struct FlatCase <: AbstractNamingConvention end
struct KebabCase <: AbstractNamingConvention end
struct PascalCase <: AbstractNamingConvention end
struct ScreamingSnakeCase <: AbstractNamingConvention end
struct SnakeCase <: AbstractNamingConvention end
function convert(from::Type{<:AbstractNamingConvention}, to::Type{<:AbstractNamingConvention}, s::AbstractString)::String
return encode(to, decode(from, s))
end
struct DecodingError <: Exception
msg::String
end
function decode(::Type{CamelCase}, s::AbstractString)::Vector{String}
return lowercase.(split(s, r"(?<=[a-z])(?=[A-Z])"))
end
function decode(::Type{FlatCase}, s::AbstractString)::Vector{String}
throw(DecodingError("FlatCase cannot be decoded"))
end
function decode(::Type{KebabCase}, s::AbstractString)::Vector{String}
return lowercase.(split(s, '-'))
end
function decode(::Type{PascalCase}, s::AbstractString)::Vector{String}
return lowercase.(split(s, r"(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])"))
end
function decode(::Type{ScreamingSnakeCase}, s::AbstractString)::Vector{String}
return lowercase.(split(s, '_'))
end
function decode(::Type{SnakeCase}, s::AbstractString)::Vector{String}
return lowercase.(split(s, '_'))
end
function encode(::Type{CamelCase}, v::AbstractVector{<:AbstractString})::String
return v[1] * join(titlecase.(v[2:end]))
end
function encode(::Type{FlatCase}, v::AbstractVector{<:AbstractString})::String
return join(lowercase.(v), ' ')
end
function encode(::Type{KebabCase}, v::AbstractVector{<:AbstractString})::String
return join(lowercase.(v), '-')
end
function encode(::Type{PascalCase}, v::AbstractVector{<:AbstractString})::String
return join(titlecase.(v))
end
function encode(::Type{ScreamingSnakeCase}, v::AbstractVector{<:AbstractString})::String
return join(uppercase.(v), '_')
end
function encode(::Type{SnakeCase}, v::AbstractVector{<:AbstractString})::String
return join(lowercase.(v), '_')
end
function test_aqua()
@testset "Ambiguities" begin
Aqua.test_ambiguities(NamingConventions, recursive = false)
end
Aqua.test_all(NamingConventions, ambiguities = false)
return nothing
end
struct ReversePascalCase <: AbstractNamingConvention end
function NamingConventions.encode(::Type{ReversePascalCase}, v::AbstractVector{<:AbstractString})::String
return join([lowercase(first(s)) * uppercase(s[2:end]) for s in v], "")
end
function NamingConventions.decode(::Type{ReversePascalCase}, s::AbstractString)::Vector{String}
return lowercase.(split(s, r"(?<=[A-Z])(?=[a-z])|(?<=[a-z])(?=[a-z][A-Z])"))
end
function test_custom_case()
@test NamingConventions.convert(CamelCase, ReversePascalCase, "camelCase") == "cAMELcASE"
@test NamingConventions.encode(ReversePascalCase, ["flat", "case"]) == "fLATcASE"
@test NamingConventions.convert(KebabCase, ReversePascalCase, "kebab-case") == "kEBABcASE"
@test NamingConventions.convert(PascalCase, ReversePascalCase, "PascalCase") == "pASCALcASE"
@test NamingConventions.convert(ReversePascalCase, ReversePascalCase, "rEVERSEpASCALcASE") == "rEVERSEpASCALcASE"
@test NamingConventions.convert(ScreamingSnakeCase, ReversePascalCase, "SCREAMING_SNAKE_CASE") == "sCREAMINGsNAKEcASE"
@test NamingConventions.convert(SnakeCase, ReversePascalCase, "snake_case") == "sNAKEcASE"
return nothing
end
using NamingConventions
using Aqua
using Test
include("aqua.jl")
include("custom.jl")
function test_camel_case()
@test NamingConventions.convert(CamelCase, CamelCase, "camelCase") == "camelCase"
@test NamingConventions.encode(CamelCase, ["flat", "case"]) == "flatCase"
@test NamingConventions.convert(KebabCase, CamelCase, "kebab-case") == "kebabCase"
@test NamingConventions.convert(PascalCase, CamelCase, "PascalCase") == "pascalCase"
@test NamingConventions.convert(ScreamingSnakeCase, CamelCase, "SCREAMING_SNAKE_CASE") == "screamingSnakeCase"
@test NamingConventions.convert(SnakeCase, CamelCase, "snake_case") == "snakeCase"
return nothing
end
function test_flat_case()
@test_throws DecodingError NamingConventions.convert(FlatCase, CamelCase, "flatcase")
@test NamingConventions.convert(CamelCase, FlatCase, "camelCase") == "camel case"
@test NamingConventions.encode(FlatCase, ["flat", "case"]) == "flat case"
@test NamingConventions.convert(KebabCase, FlatCase, "kebab-case") == "kebab case"
@test NamingConventions.convert(PascalCase, FlatCase, "PascalCase") == "pascal case"
@test NamingConventions.convert(ScreamingSnakeCase, FlatCase, "SCREAMING_SNAKE_CASE") == "screaming snake case"
@test NamingConventions.convert(SnakeCase, FlatCase, "snake_case") == "snake case"
return nothing
end
function test_kebab_case()
@test NamingConventions.convert(CamelCase, KebabCase, "camelCase") == "camel-case"
@test NamingConventions.encode(KebabCase, ["flat", "case"]) == "flat-case"
@test NamingConventions.convert(KebabCase, KebabCase, "kebab-case") == "kebab-case"
@test NamingConventions.convert(PascalCase, KebabCase, "PascalCase") == "pascal-case"
@test NamingConventions.convert(ScreamingSnakeCase, KebabCase, "SCREAMING_SNAKE_CASE") == "screaming-snake-case"
@test NamingConventions.convert(SnakeCase, KebabCase, "snake_case") == "snake-case"
return nothing
end
function test_pascal_case()
@test NamingConventions.convert(CamelCase, PascalCase, "camelCase") == "CamelCase"
@test NamingConventions.encode(PascalCase, ["flat", "case"]) == "FlatCase"
@test NamingConventions.convert(KebabCase, PascalCase, "kebab-case") == "KebabCase"
@test NamingConventions.convert(PascalCase, PascalCase, "PascalCase") == "PascalCase"
@test NamingConventions.convert(ScreamingSnakeCase, PascalCase, "SCREAMING_SNAKE_CASE") == "ScreamingSnakeCase"
@test NamingConventions.convert(SnakeCase, PascalCase, "snake_case") == "SnakeCase"
return nothing
end
function test_screaming_snake_case()
@test NamingConventions.convert(CamelCase, ScreamingSnakeCase, "camelCase") == "CAMEL_CASE"
@test NamingConventions.encode(ScreamingSnakeCase, ["flat", "case"]) == "FLAT_CASE"
@test NamingConventions.convert(KebabCase, ScreamingSnakeCase, "kebab-case") == "KEBAB_CASE"
@test NamingConventions.convert(PascalCase, ScreamingSnakeCase, "PascalCase") == "PASCAL_CASE"
@test NamingConventions.convert(ScreamingSnakeCase, ScreamingSnakeCase, "SCREAMING_SNAKE_CASE") == "SCREAMING_SNAKE_CASE"
@test NamingConventions.convert(SnakeCase, ScreamingSnakeCase, "snake_case") == "SNAKE_CASE"
return nothing
end
function test_snake_case()
@test NamingConventions.convert(CamelCase, SnakeCase, "camelCase") == "camel_case"
@test NamingConventions.encode(SnakeCase, ["flat", "case"]) == "flat_case"
@test NamingConventions.convert(KebabCase, SnakeCase, "kebab-case") == "kebab_case"
@test NamingConventions.convert(PascalCase, SnakeCase, "PascalCase") == "pascal_case"
@test NamingConventions.convert(ScreamingSnakeCase, SnakeCase, "SCREAMING_SNAKE_CASE") == "screaming_snake_case"
@test NamingConventions.convert(SnakeCase, SnakeCase, "snake_case") == "snake_case"
return nothing
end
function test_all()
@testset "Aqua.jl" begin
test_aqua()
end
@testset "camel case" begin
test_camel_case()
end
@testset "flat case" begin
test_flat_case()
end
@testset "kebab case" begin
test_kebab_case()
end
@testset "pascal case" begin
test_pascal_case()
end
@testset "screaming snake case" begin
test_screaming_snake_case()
end
@testset "snake case" begin
test_snake_case()
end
@testset "custom case" begin
test_custom_case()
end
return nothing
end
test_all()
# NamingConventions.jl
[](https://github.com/raphasampaio/NamingConventions.jl/actions/workflows/CI.yml)
[](https://codecov.io/gh/raphasampaio/NamingConventions.jl)
[](https://github.com/JuliaTesting/Aqua.jl)
## Introduction
NamingConventions.jl is a lightweight and flexible Julia package that facilitates the conversion between various naming conventions commonly used in programming. The package supports the following conventions out-of-the-box:
- `CamelCase`: youTube
- `FlatCase`: youtube
- `KebabCase`: you-tube
- `PascalCase`: YouTube
- `ScreamingSnakeCase`: YOU_TUBE
- `SnakeCase`: you_tube
The flexibility of this package lies in its extensibility. By implementing your own encoding and decoding logic, you can create tailored naming conventions that fit your specific needs. All you need to do is define a new type that inherits from `AbstractNamingConvention` and implement the `encode` and `decode` functions.
## Getting Started
### Installation
```julia
julia> ] add NamingConventions
```
### Example 1: Converting Between Naming Conventions
Here is how you can easily convert between different naming styles:
```julia
using NamingConventions
# convert from snake_case to camelCase
@show NamingConventions.convert(SnakeCase, CamelCase, "snake_case") # output: "snakeCase"
# convert from camelCase to kebab-case
@show NamingConventions.convert(CamelCase, KebabCase, "camelCase") # output: "camel-case"
# convert from kebab-case to PascalCase
@show NamingConventions.convert(KebabCase, PascalCase, "kebab-case") # output: "KebabCase"
```
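Under the hood, `convert` is simply `encode(to, decode(from, s))`, so the two halves — both exported — can also be used on their own:

```julia
using NamingConventions

parts = decode(SnakeCase, "snake_case")  # ["snake", "case"]
encode(PascalCase, parts)                # "SnakeCase"
```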
### Example 2: Defining a Custom Naming Convention
You can define your own naming conventions by creating a new type and implementing the required methods:
```julia
using NamingConventions
struct ReversePascalCase <: AbstractNamingConvention end
function NamingConventions.encode(::Type{ReversePascalCase}, v::AbstractVector{<:AbstractString})
return join([lowercase(first(s)) * uppercase(s[2:end]) for s in v], "")
end
function NamingConventions.decode(::Type{ReversePascalCase}, s::AbstractString)
return lowercase.(split(s, r"(?<=[A-Z])(?=[a-z])|(?<=[a-z])(?=[a-z][A-Z])"))
end
# convert from camelCase to ReversePascalCase
@show NamingConventions.convert(CamelCase, ReversePascalCase, "camelCase") # output: "cAMELcASE"
# convert from snake_case to ReversePascalCase
@show NamingConventions.convert(SnakeCase, ReversePascalCase, "snake_case") # output: "sNAKEcASE"
```
## Contributing
Contributions, bug reports, and feature requests are welcome! Feel free to open an issue or submit a pull request.
using Documenter
using Mustache
makedocs(
sitename = "Mustache",
format = Documenter.HTML(),
modules = [Mustache]
)
# Documenter can also automatically deploy documentation to gh-pages.
# See "Hosting Documentation" and deploydocs() in the Documenter manual
# for more information.
deploydocs(
repo = "github.com/jverzani/Mustache.jl.git"
)
| Mustache | https://github.com/jverzani/Mustache.jl.git |