licenses (sequence, 1–3 items) | version (677 classes) | tree_hash (40 chars) | path (1 class) | type (2 classes) | size (2–8 chars) | text (25–67.1M chars) | package_name (2–41 chars) | repo (33–86 chars)
---|---|---|---|---|---|---|---|---
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 9236 | # # TEST 13H: square plate under harmonic loading. Modal model
# Source code: [`TEST13H_mod_tut.jl`](TEST13H_mod_tut.jl)
# ## Description
# A harmonic forced vibration problem is solved for a homogeneous square plate,
# simply supported on the circumference. This is TEST 13H from the Abaqus
# v6.12 Benchmarks manual. The test is recommended by the National Agency for
# Finite Element Methods and Standards (U.K.): Test 13 from NAFEMS “Selected
# Benchmarks for Forced Vibration,” R0016, March 1993.
# The plate is discretized with hexahedral solid elements. The simple support
# condition is approximated by distributed rollers on the boundary. Because
# only the out-of-plane displacements are prevented, the structure has three
# rigid body modes in the plane of the plate.
# The nonzero benchmark frequencies are 2.377, 5.961, 5.961, 9.483,
# 12.133, 12.133, 15.468, and 15.468 Hz.
# The magnitude of the displacement at the fundamental frequency (2.377 Hz) is
# 45.42 mm according to the reference solution.
# 
# This is the so-called modal model: the response is expressed as a linear
# combination of the eigenvectors. The finite element model is transformed into
# the modal space. With Rayleigh damping the reduced matrices are in fact
# diagonal. The natural frequencies of the model can be evaluated fairly
# quickly, given the small size of the model, and many more frequencies can
# therefore be processed in the frequency sweep.
# ## References
# [1] NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
# ## Goals
# - Show how to generate the hexahedral mesh of the plate.
# - Execute a frequency sweep using a modal model.
##
# ## Definitions
# Bring in required support.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
using FinEtoolsDeforLinear
using LinearAlgebra
using Arpack
# The input parameters come from [1].
E = 200*phun("GPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 8000*phun("kg*m^-3");# mass density
qmagn = 100.0*phun("Pa")
L = 10.0*phun("m"); # side of the square plate
t = 0.05*phun("m"); # thickness of the square plate
nL = 16; nt = 4;
tolerance = t/nt/100;
frequencies = vcat(linearspace(0.0,2.377,150), linearspace(2.377,15.0,400))
# Compute the parameters of Rayleigh damping. With the damping matrix taken as
# $C = a_0 M + a_1 K$, the damping ratio at the angular frequency $\omega_m$ is
#
# $\xi_m = \frac{a_0}{2\omega_m} + \frac{a_1\omega_m}{2}$
#
# where $m=1,2$ for the two selected frequencies. Solving the two equations for
# the Rayleigh parameters $a_0, a_1$ yields:
zeta1 = 0.02; zeta2 = 0.02;
o1 = 2*pi*2.377; o2 = 2*pi*15.468;
a0 = 2*(o1*o2)/(o2^2-o1^2)*(o2*zeta1-o1*zeta2);# a0
a1 = 2*(o1*o2)/(o2^2-o1^2)*(-1/o2*zeta1+1/o1*zeta2);# a1
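# A quick check (illustrative, not part of the original benchmark): substituting
# back, the recovered damping ratios at the two selected frequencies should both
# come out as 0.02.
@show a0/(2*o1) + a1*o1/2
@show a0/(2*o2) + a1*o2/2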
##
# ## Discrete model
# Generate the finite element domain as a block.
fens,fes = H8block(L, L, t, nL, nL, nt)
# Create the geometry field.
geom = NodalField(fens.xyz)
# Create the displacement field. Note that it holds complex numbers.
u = NodalField(zeros(ComplexF64, size(fens.xyz,1), 3)) # displacement field
# In order to apply the essential boundary conditions we need to select the
# nodes along the side faces of the plate and support them in the Z direction.
nl = selectnode(fens, box=[0.0 0.0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf 0.0 0.0 -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf L L -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
# Those boundary conditions can now be applied to the displacement field,...
applyebc!(u)
# ... and the degrees of freedom can be numbered.
numberdofs!(u)
println("nfreedofs = $(nfreedofs(u))")
# The model is three-dimensional.
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
# Given how relatively thin the plate is, we choose an effective element: the
# mean-strain hexahedral element, which is quite tolerant of high aspect
# ratios.
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
# These elements need to know the geometry before anything else can be
# computed using them in a finite element machine. Hence we first need to
# associate the geometry with the FEMM.
femm = associategeometry!(femm, geom)
# Now we can calculate the stiffness matrix (using the mean-strain formulation
# set up above) and the mass matrix (using a standard formulation with the
# 3-point Gauss rule).
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
# The damping matrix is a linear combination of the mass matrix and the
# stiffness matrix (Rayleigh model).
C = a0*M + a1*K
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
C_ff = matrix_blocked(C, nfreedofs(u))[:ff]
# Find the boundary finite elements on top of the plate. The uniform distributed
# loading will be applied to these elements.
bdryfes = meshboundary(fes)
# Those facing up (in the positive Z direction) will be chosen:
topbfl = selectelem(fens, bdryfes, facing=true, direction=[0.0 0.0 1.0])
# A base finite element model machine will be created to evaluate the loading.
# The force intensity is created as driven by a function, but the function
# simply fills the buffer with the constant loading vector.
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
    forceout .= [0.0, 0.0, qmagn]
    return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
# The loading vector is lumped from the distributed uniform loading by
# integrating on the boundary. Hence, the dimension of the integration domain
# is 2.
el1femm = FEMMBase(IntegDomain(subset(bdryfes,topbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f = vector_blocked(F, nfreedofs(u))[:f]
##
# ## Transformation into the modal model
# We will have to find the natural frequencies and mode shapes. Without much
# justification, we picked 60 natural frequencies to include. Usually this
# needs to be done carefully so that nothing important is missed. In this case
# the number may be overkill.
OmegaShift = (0.01*2*pi)^2; # Squared angular frequency of the shift, needed because the in-plane rigid body modes make the stiffness matrix singular
neigvs = 60
t0 = time()
evals, evecs, nconv = eigs(K_ff+OmegaShift*M_ff, M_ff; nev=neigvs, which=:SM)
evals .-= OmegaShift
@show tep = time() - t0
@show nconv == neigvs
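# A quick sanity check (illustrative): the highest retained natural frequency
# should comfortably exceed the top of the sweep range (15 Hz here).
@show sqrt(maximum(abs.(real.(evals))))/(2*pi)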
# Now the matrices are reduced. In fact, since all of these reduced matrices are
# diagonal for Rayleigh damping, we could store only their diagonals, and the
# solution of the modal equations of motion would become trivial (an
# illustration follows below).
Mr = evecs' * M_ff * evecs
Kr = evecs' * K_ff * evecs
Cr = evecs' * C_ff * evecs
# The loading also needs to be transformed (projected) into the modal vector
# space.
Fr = evecs' * F_f
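# Purely as an illustration of the remark above (not used in the sweep below),
# with the diagonal approximation the modal solve at a single trial angular
# frequency reduces to an elementwise division. `omega_trial` is an arbitrary
# sample value introduced only for this sketch.
let omega_trial = 2*pi*2.377
    Mrd, Crd, Krd = diag(Mr), diag(Cr), diag(Kr)
    Ur_trial = Fr ./ (-omega_trial^2 .* Mrd .+ 1im*omega_trial .* Crd .+ Krd)
    @show norm(Ur_trial)
end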
##
# ## Sweep through the frequencies
# Sweep through the frequencies and calculate the complex displacement vector
# for each of the frequencies from the complex balance equations of the
# structure.
# The entire solution will be stored in this array:
U1 = zeros(ComplexF64, nfreedofs(u), length(frequencies))
print("Sweeping through $(length(frequencies)) frequencies\n")
t0 = time()
for k in eachindex(frequencies)
    f = frequencies[k];
    omega = 2*pi*f;
    # Solve the reduced equations.
    Ur = (-omega^2*Mr + 1im*omega*Cr + Kr)\Fr;
    # Reconstruct the solution in the finite element space.
    U1[:, k] = evecs * Ur;
    print(".")
end
print("\nTime = $(time()-t0)\n")
# Find the midpoint of the plate bottom surface. For this purpose the number of
# elements along the edge of the plate needs to be divisible by two.
midpoint = selectnode(fens, box=[L/2 L/2 L/2 L/2 0 0], inflate=tolerance);
# Check that we found that node.
@assert midpoint != []
# Extract the displacement component in the vertical direction (Z).
midpointdof = u.dofnums[midpoint, 3]
##
# ## Plot the results
using Gnuplot
# Plot the amplitude of the FRF.
umidAmpl = abs.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 0 " :-
@gp :- vec(frequencies) vec(umidAmpl) "lw 2 lc rgb 'red' with lines title 'Displacement of the midpoint' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement amplitude [mm]'"
# Plot the FRF real and imaginary components.
umidReal = real.(U1[midpointdof, :])/phun("MM")
umidImag = imag.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 1 " :-
@gp :- vec(frequencies) vec(umidReal) "lw 2 lc rgb 'red' with lines title 'Real' " :-
@gp :- vec(frequencies) vec(umidImag) "lw 2 lc rgb 'blue' with lines title 'Imaginary' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement FRF [mm]'"
# Plot the phase shift of the FRF.
umidPhase = atan.(umidImag, umidReal)/pi*180
@gp "set terminal windows 2 " :-
@gp :- vec(frequencies) vec(umidPhase) "lw 2 lc rgb 'red' with lines title 'Phase shift' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Phase shift [deg]'"
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 8630 | # # TEST 13H: square plate under harmonic loading. Parallel execution.
# Source code: [`TEST13H_par_tut.jl`](TEST13H_par_tut.jl)
# ## Description
# A harmonic forced vibration problem is solved for a homogeneous square plate,
# simply supported on the circumference. This is TEST 13H from the Abaqus
# v6.12 Benchmarks manual. The test is recommended by the National Agency for
# Finite Element Methods and Standards (U.K.): Test 13 from NAFEMS “Selected
# Benchmarks for Forced Vibration,” R0016, March 1993.
# The plate is discretized with hexahedral solid elements. The simple support
# condition is approximated by distributed rollers on the boundary. Because
# only the out-of-plane displacements are prevented, the structure has three
# rigid body modes in the plane of the plate.
# The nonzero benchmark frequencies are 2.377, 5.961, 5.961, 9.483,
# 12.133, 12.133, 15.468, and 15.468 Hz.
# The magnitude of the displacement at the fundamental frequency (2.377 Hz) is
# 45.42 mm according to the reference solution.
# 
# The harmonic response loop is processed with multiple threads. The algorithm
# is embarrassingly parallel (i.e., no communication is required), hence the
# parallel execution is particularly simple.
# ## References
# [1] NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
# ## Goals
# - Show how to generate the hexahedral mesh of the plate.
# - Execute the frequency sweep in parallel using threads.
##
# ## Definitions
# Bring in required support.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
using FinEtoolsDeforLinear
using LinearAlgebra
using Arpack
# The input parameters come from [1].
E = 200*phun("GPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 8000*phun("kg*m^-3");# mass density
qmagn = 100.0*phun("Pa")
L = 10.0*phun("m"); # side of the square plate
t = 0.05*phun("m"); # thickness of the square plate
nL = 16; nt = 4;
tolerance = t/nt/100;
frequencies = vcat(linearspace(0.0,2.377,15), linearspace(2.377,15.0,40))
# Compute the parameters of Rayleigh damping. With the damping matrix taken as
# $C = a_0 M + a_1 K$, the damping ratio at the angular frequency $\omega_m$ is
#
# $\xi_m = \frac{a_0}{2\omega_m} + \frac{a_1\omega_m}{2}$
#
# where $m=1,2$ for the two selected frequencies. Solving the two equations for
# the Rayleigh parameters $a_0, a_1$ yields:
zeta1 = 0.02; zeta2 = 0.02;
o1 = 2*pi*2.377; o2 = 2*pi*15.468;
a0 = 2*(o1*o2)/(o2^2-o1^2)*(o2*zeta1-o1*zeta2);# a0
a1 = 2*(o1*o2)/(o2^2-o1^2)*(-1/o2*zeta1+1/o1*zeta2);# a1
##
# ## Discrete model
# Generate the finite element domain as a block.
fens,fes = H8block(L, L, t, nL, nL, nt)
# Create the geometry field.
geom = NodalField(fens.xyz)
# Create the displacement field. Note that it holds complex numbers.
u = NodalField(zeros(ComplexF64, size(fens.xyz,1), 3)) # displacement field
# In order to apply the essential boundary conditions we need to select the
# nodes along the side faces of the plate and support them in the Z direction.
nl = selectnode(fens, box=[0.0 0.0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf 0.0 0.0 -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf L L -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
# Those boundary conditions can now be applied to the displacement field,...
applyebc!(u)
# ... and the degrees of freedom can be numbered.
numberdofs!(u)
println("nfreedofs = $(nfreedofs(u))")
# The model is three-dimensional.
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
# Given how relatively thin the plate is, we choose an effective element: the
# mean-strain hexahedral element, which is quite tolerant of high aspect
# ratios.
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
# These elements need to know the geometry before anything else can be
# computed using them in a finite element machine. Hence we first need to
# associate the geometry with the FEMM.
femm = associategeometry!(femm, geom)
# Now we can calculate the stiffness matrix (using the mean-strain formulation
# set up above) and the mass matrix (using a standard formulation with the
# 3-point Gauss rule).
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
# The damping matrix is a linear combination of the mass matrix and the
# stiffness matrix (Rayleigh model).
C = a0*M + a1*K
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
C_ff = matrix_blocked(C, nfreedofs(u))[:ff]
# Find the boundary finite elements on top of the plate. The uniform distributed
# loading will be applied to these elements.
bdryfes = meshboundary(fes)
# Those facing up (in the positive Z direction) will be chosen:
topbfl = selectelem(fens, bdryfes, facing=true, direction=[0.0 0.0 1.0])
# A base finite element model machine will be created to evaluate the loading.
# The force intensity is created as driven by a function, but the function
# simply fills the buffer with the constant loading vector.
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
    forceout .= [0.0, 0.0, qmagn]
    return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
# The loading vector is lumped from the distributed uniform loading by
# integrating on the boundary. Hence, the dimension of the integration domain
# is 2.
el1femm = FEMMBase(IntegDomain(subset(bdryfes,topbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f = vector_blocked(F, nfreedofs(u))[:f]
##
# ## Sweep through the frequencies
# Sweep through the frequencies and calculate the complex displacement vector
# for each of the frequencies from the complex balance equations of the
# structure.
# The entire solution will be stored in this array:
U1 = zeros(ComplexF64, nfreedofs(u), length(frequencies))
# It is best to prevent the BLAS library from using threads concurrently with
# our own use of threads. The threads might easily become oversubscribed, with
# attendant slowdown.
LinearAlgebra.BLAS.set_num_threads(1)
# We utilize all the threads with which Julia was started. We can select the
# number of threads to use by running the executable as `julia -t n`, where `n`
# is the number of threads.
using Base.Threads
print("Number of threads: $(nthreads())\n")
print("Sweeping through $(length(frequencies)) frequencies\n")
t0 = time()
Threads.@threads for k in eachindex(frequencies)
    f = frequencies[k];
    omega = 2*pi*f;
    U1[:, k] = (-omega^2*M_ff + 1im*omega*C_ff + K_ff)\F_f;
    print(".")
end
print("\nTime = $(time()-t0)\n")
# On Windows the scaling is not great, which is not Julia's fault, but rather
# the operating system's failing. Linux usually gives much greater speedups.
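# A spot check (illustrative): recompute the response at one of the frequencies
# serially and compare with the column produced by the threaded loop above. The
# difference should be at the level of round-off.
let k = length(frequencies), omega = 2*pi*frequencies[k]
    Uk = (-omega^2*M_ff + 1im*omega*C_ff + K_ff)\F_f
    @show maximum(abs.(Uk - U1[:, k]))
end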
# Find the midpoint of the plate bottom surface. For this purpose the number of
# elements along the edge of the plate needs to be divisible by two.
midpoint = selectnode(fens, box=[L/2 L/2 L/2 L/2 0 0], inflate=tolerance);
# Check that we found that node.
@assert midpoint != []
# Extract the displacement component in the vertical direction (Z).
midpointdof = u.dofnums[midpoint, 3]
##
# ## Plot the results
using Gnuplot
# Plot the amplitude of the FRF.
umidAmpl = abs.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 0 " :-
@gp :- vec(frequencies) vec(umidAmpl) "lw 2 lc rgb 'red' with lines title 'Displacement of the midpoint' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement amplitude [mm]'"
# Plot the FRF real and imaginary components.
umidReal = real.(U1[midpointdof, :])/phun("MM")
umidImag = imag.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 1 " :-
@gp :- vec(frequencies) vec(umidReal) "lw 2 lc rgb 'red' with lines title 'Real' " :-
@gp :- vec(frequencies) vec(umidImag) "lw 2 lc rgb 'blue' with lines title 'Imaginary' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement FRF [mm]'"
# Plot the phase shift of the FRF.
umidPhase = atan.(umidImag, umidReal)/pi*180
@gp "set terminal windows 2 " :-
@gp :- vec(frequencies) vec(umidPhase) "lw 2 lc rgb 'red' with lines title 'Phase shift' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Phase shift [deg]'"
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 7759 | # # TEST 13H: square plate under harmonic loading
# Source code: [`TEST13H_tut.jl`](TEST13H_tut.jl)
# ## Description
# A harmonic forced vibration problem is solved for a homogeneous square plate,
# simply supported on the circumference. This is TEST 13H from the Abaqus
# v6.12 Benchmarks manual. The test is recommended by the National Agency for
# Finite Element Methods and Standards (U.K.): Test 13 from NAFEMS “Selected
# Benchmarks for Forced Vibration,” R0016, March 1993.
# The plate is discretized with hexahedral solid elements. The simple support
# condition is approximated by distributed rollers on the boundary. Because
# only the out-of-plane displacements are prevented, the structure has three
# rigid body modes in the plane of the plate.
# The nonzero benchmark frequencies are 2.377, 5.961, 5.961, 9.483,
# 12.133, 12.133, 15.468, and 15.468 Hz.
# The magnitude of the displacement at the fundamental frequency (2.377 Hz) is
# 45.42 mm according to the reference solution.
# 
# ## References
# [1] NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
# ## Goals
# - Show how to generate the hexahedral mesh of the plate.
# - Execute a frequency sweep, solving the full system of equations at each frequency.
##
# ## Definitions
# Bring in required support.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
using FinEtoolsDeforLinear
using LinearAlgebra
using Arpack
# The input parameters come from [1].
E = 200*phun("GPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 8000*phun("kg*m^-3");# mass density
qmagn = 100.0*phun("Pa")
L = 10.0*phun("m"); # side of the square plate
t = 0.05*phun("m"); # thickness of the square plate
nL = 16; nt = 4;
tolerance = t/nt/100;
frequencies = vcat(linearspace(0.0,2.377,15), linearspace(2.377,15.0,40))
# Compute the parameters of Rayleigh damping. With the damping matrix taken as
# $C = a_0 M + a_1 K$, the damping ratio at the angular frequency $\omega_m$ is
#
# $\xi_m = \frac{a_0}{2\omega_m} + \frac{a_1\omega_m}{2}$
#
# where $m=1,2$ for the two selected frequencies. Solving the two equations for
# the Rayleigh parameters $a_0, a_1$ yields:
zeta1 = 0.02; zeta2 = 0.02;
o1 = 2*pi*2.377; o2 = 2*pi*15.468;
a0 = 2*(o1*o2)/(o2^2-o1^2)*(o2*zeta1-o1*zeta2);# a0
a1 = 2*(o1*o2)/(o2^2-o1^2)*(-1/o2*zeta1+1/o1*zeta2);# a1
##
# ## Discrete model
# Generate the finite element domain as a block.
fens,fes = H8block(L, L, t, nL, nL, nt)
# Create the geometry field.
geom = NodalField(fens.xyz)
# Create the displacement field. Note that it holds complex numbers.
u = NodalField(zeros(ComplexF64, size(fens.xyz,1), 3)) # displacement field
# In order to apply the essential boundary conditions we need to select the
# nodes along the side faces of the plate and support them in the Z direction.
nl = selectnode(fens, box=[0.0 0.0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf 0.0 0.0 -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf L L -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
# Those boundary conditions can now be applied to the displacement field,...
applyebc!(u)
# ... and the degrees of freedom can be numbered.
numberdofs!(u)
println("nfreedofs = $(nfreedofs(u))")
# The model is three-dimensional.
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
# Given how relatively thin the plate is, we choose an effective element: the
# mean-strain hexahedral element, which is quite tolerant of high aspect
# ratios.
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
# These elements need to know the geometry before anything else can be
# computed using them in a finite element machine. Hence we first need to
# associate the geometry with the FEMM.
femm = associategeometry!(femm, geom)
# Now we can calculate the stiffness matrix (using the mean-strain formulation
# set up above) and the mass matrix (using a standard formulation with the
# 3-point Gauss rule).
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
# The damping matrix is a linear combination of the mass matrix and the
# stiffness matrix (Rayleigh model).
C = a0*M + a1*K
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
C_ff = matrix_blocked(C, nfreedofs(u))[:ff]
# Find the boundary finite elements on top of the plate. The uniform distributed
# loading will be applied to these elements.
bdryfes = meshboundary(fes)
# Those facing up (in the positive Z direction) will be chosen:
topbfl = selectelem(fens, bdryfes, facing=true, direction=[0.0 0.0 1.0])
# A base finite element model machine will be created to evaluate the loading.
# The force intensity is created as driven by a function, but the function
# simply fills the buffer with the constant loading vector.
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
    forceout .= [0.0, 0.0, qmagn]
    return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
# The loading vector is lumped from the distributed uniform loading by
# integrating on the boundary. Hence, the dimension of the integration domain
# is 2.
el1femm = FEMMBase(IntegDomain(subset(bdryfes,topbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f = vector_blocked(F, nfreedofs(u))[:f]
##
# ## Sweep through the frequencies
# Sweep through the frequencies and calculate the complex displacement vector
# for each of the frequencies from the complex balance equations of the
# structure.
# The entire solution will be stored in this array:
U1 = zeros(ComplexF64, nfreedofs(u), length(frequencies))
print("Sweeping through $(length(frequencies)) frequencies\n")
t0 = time()
for k in eachindex(frequencies)
    f = frequencies[k];
    omega = 2*pi*f;
    U1[:, k] = (-omega^2*M_ff + 1im*omega*C_ff + K_ff)\F_f;
    print(".")
end
print("\nTime = $(time()-t0)\n")
# Find the midpoint of the plate bottom surface. For this purpose the number of
# elements along the edge of the plate needs to be divisible by two.
midpoint = selectnode(fens, box=[L/2 L/2 L/2 L/2 0 0], inflate=tolerance);
# Check that we found that node.
@assert midpoint != []
# Extract the displacement component in the vertical direction (Z).
midpointdof = u.dofnums[midpoint, 3]
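# A rough comparison with the reference value quoted above (illustrative): the
# peak midpoint amplitude over the sweep should land in the vicinity of 45.42 mm.
# The coarse frequency grid and mesh mean only approximate agreement is expected.
@show maximum(abs.(U1[midpointdof, :]))/phun("MM")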
##
# ## Plot the results
using Gnuplot
# Plot the amplitude of the FRF.
umidAmpl = abs.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 0 " :-
@gp :- vec(frequencies) vec(umidAmpl) "lw 2 lc rgb 'red' with lines title 'Displacement of the midpoint' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement amplitude [mm]'"
# Plot the FRF real and imaginary components.
umidReal = real.(U1[midpointdof, :])/phun("MM")
umidImag = imag.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 1 " :-
@gp :- vec(frequencies) vec(umidReal) "lw 2 lc rgb 'red' with lines title 'Real' " :-
@gp :- vec(frequencies) vec(umidImag) "lw 2 lc rgb 'blue' with lines title 'Imaginary' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement FRF [mm]'"
# Plot the phase shift of the FRF.
umidPhase = atan.(umidImag, umidReal)/pi*180
@gp "set terminal windows 2 " :-
@gp :- vec(frequencies) vec(umidPhase) "lw 2 lc rgb 'red' with lines title 'Phase shift' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Phase shift [deg]'"
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 6733 | # # Beam under on/off loading: transient response
# Source code: [`beam_load_on_off_tut.jl`](beam_load_on_off_tut.jl)
# ## Description
# A cantilever beam is loaded by a trapezoidal-pulse traction load at its free
# cross-section. The load is ramped up over 0.015 seconds, held constant, and
# ramped off again shortly before 0.4 seconds. The beam oscillates about its
# equilibrium configuration.
# The beam is modeled as a solid. The trapezoidal rule is used to integrate the
# equations of motion in time. Rayleigh mass- and stiffness-proportional
# damping is incorporated. The dynamic stiffness is factorized for efficiency.
# 
# ## Goals
# - Show how to create the discrete model, with implicit dynamics and proportional damping.
# - Apply distributed loading varying in time.
# - Demonstrate trapezoidal-rule time stepping.
##
# ## Definitions
# Basic imports.
using LinearAlgebra
using Arpack
# This is the finite element toolkit itself.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
# Input parameters
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.005;
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
qmagn = 0.1*phun("MPa");
tend = 0.5*phun("SEC");
##
# ## Create the discrete model
MR = DeforModelRed3D
fens,fes = H8block(L, W, H, 50, 2, 4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
# Find the boundary finite elements at the tip cross-section of the beam. The
# uniform distributed loading will be applied to these elements.
bdryfes = meshboundary(fes)
# Those facing in the negative X direction (i.e., on the free end of the beam) will be chosen:
tipbfl = selectelem(fens, bdryfes, facing=true, direction=[-1.0 0.0 0.0])
# A base finite element model machine will be created to evaluate the loading.
# The force intensity is created as driven by a function, but the function
# simply fills the buffer with the constant loading vector.
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
    forceout .= [0.0, 0.0, qmagn]
    return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
# The loading vector is lumped from the distributed uniform loading by
# integrating on the boundary. Hence, the dimension of the integration domain
# is 2.
el1femm = FEMMBase(IntegDomain(subset(bdryfes,tipbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f, F_d = vector_blocked(F, nfreedofs(u))[(:f, :d)]
# The loading function is defined as a time-dependent multiplier of the
# constant distribution of the loading on the structure.
function tmult(t)
    if t <= 0.015
        t/0.015
    elseif t >= 0.4
        0.0
    elseif t <= 0.385
        1.0
    else
        (t - 0.4)/(0.385 - 0.4)
    end
end
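# A quick illustration (not needed for the analysis): sampling the multiplier at
# a few representative times shows the trapezoidal pulse shape: ramp up, hold,
# ramp down, zero.
@show [tmult(t) for t in (0.0, 0.0075, 0.015, 0.2, 0.385, 0.3925, 0.4, 0.5)]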
##
# ## Time step determination
# We figure out the fundamental mode frequency; the time step is then chosen as
# a fraction of the corresponding period.
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:SM);
# The fundamental angular frequency is then:
omega_f = real(sqrt(evals[1]));
# We take the time step to be a fraction of the period of vibration in the
# fundamental mode.
@show dt = 0.05 * 1/(omega_f/2/pi);
##
# ## Damping model
# We take the damping to be representative of what's happening at the
# fundamental vibration frequency.
# For a given loss factor `loss_tangent` at a certain frequency $\omega_f$ (i.e. a
# damping ratio of roughly half the loss factor), splitting the damping equally
# between the two terms gives a stiffness-proportional coefficient of
# (loss_tangent/2)/$\omega_f$ and a mass-proportional coefficient of
# (loss_tangent/2)*$\omega_f$.
Rayleigh_mass = (loss_tangent/2)*omega_f;
Rayleigh_stiffness = (loss_tangent/2)/omega_f;
# Now we construct the Rayleigh damping matrix as a linear combination of the
# stiffness and mass matrices.
C_ff = Rayleigh_stiffness * K_ff + Rayleigh_mass * M_ff
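# For reference, the velocity-form trapezoidal (average acceleration) update used
# in the loop below follows from averaging the equation of motion over the step,
# $M(V_1-V_0)/\Delta t + C(V_0+V_1)/2 + K(U_0+U_1)/2 = (F_0+F_1)/2$, together with
# $U_1 = U_0 + (\Delta t/2)(V_0+V_1)$, which gives
#
# $(M + \frac{\Delta t}{2}C + (\frac{\Delta t}{2})^2K)V_1 = MV_0 - C\frac{\Delta t}{2}V_0 - K((\frac{\Delta t}{2})^2V_0 + \Delta t U_0) + \frac{\Delta t}{2}(F_0+F_1)$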
# The time stepping loop is protected by `let end` to avoid unpleasant surprises
# with variables getting clobbered by globals.
ts, corneruzs = let dt = dt, F_f = F_f
# Initial displacement, velocity, and acceleration.
U0 = gathersysvec(u)
v = deepcopy(u)
V0 = gathersysvec(v)
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
F0 = deepcopy(F_f)
F1 = fill(0.0, length(F0))
R = fill(0.0, length(F0))
# Factorize the dynamic stiffness
DSF = cholesky((M_ff + (dt/2)*C_ff + ((dt/2)^2)*K_ff))
# The times and displacements of the corner will be collected into two vectors
ts = Float64[]
corneruzs = Float64[]
# Let us begin the time integration loop:
t = 0.0;
step = 0;
F0 .= tmult(t) .* F_f
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,100)==0) && println("Step $(step): $(t)")
# Set the time-dependent load
F1 .= tmult(t) .* F_f
# Compute the out of balance force.
R .= (M_ff*V0 - C_ff*(dt/2*V0) - K_ff*((dt/2)^2*V0 + dt*U0) + (dt/2)*(F0+F1));
# Calculate the new velocities.
V1 = DSF\R;
# Update the velocities.
U1 = U0 + (dt/2)*(V0+V1);
# Switch the temporary vectors for the next step.
U0, U1 = U1, U0;
V0, V1 = V1, V0;
F0, F1 = F1, F0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
##
# ## Plot the results
using Gnuplot
@gp "set terminal windows 0 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
# The end.
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 4515 | # # Tracking transient deformation of a cantilever beam: centered difference
# Source code: [`bending_wave_Ray_expl_cd_tut.jl`](bending_wave_Ray_expl_cd_tut.jl)
# ## Description
# A cantilever beam is given an initial velocity and then at time 0.0 it is
# suddenly stopped by fixing one of its ends. This sends a wave down the beam.
# The beam is modeled as a solid. The centered difference rule is used to
# integrate the equations of motion in time. No damping is present.
# ## Goals
# - Show how to create the discrete model for explicit dynamics.
# - Demonstrate centered difference time stepping.
##
# ## Definitions
# Basic imports.
using LinearAlgebra
using Arpack
# This is the finite element toolkit itself.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
# Input parameters
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.0001;
frequency = 1/0.0058;
Rayleigh_mass = 2*loss_tangent*(2*pi*frequency);
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.013*phun("SEC");
##
# ## Create the discrete model
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 50,1,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
# Assemble the mass matrix as diagonal. The HRZ lumping technique is
# applied through the assembler of the sparse matrix.
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
# Form the mass-proportional damping matrix. (It is assembled here for reference;
# the centered difference loop below does not apply any damping.)
C_ff = Rayleigh_mass * M_ff
# Figure out the highest frequency in the model, and choose a time step just
# below the critical value $2/\omega_{\max}$ of the centered difference rule.
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 0.99 * 2/real(sqrt(evals[1]));
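# For orientation only (an illustrative estimate, not used by the algorithm): the
# critical step is of the order of the smallest element edge divided by the
# dilatational wave speed. `cdil` is a helper introduced just for this check.
cdil = sqrt(E*(1 - nu)/((1 + nu)*(1 - 2*nu))/rho) # dilatational wave speed
@show (H/4)/cdil # smallest element edge (4 elements through the thickness H)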
# The time stepping loop is protected by `let end` to avoid unpleasant surprises
# with variables getting clobbered by globals.
ts, corneruzs = let dt = dt
# Initial displacement, velocity, and acceleration.
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 3] .= vmag
V0 = gathersysvec(v)
F1 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
A0 = fill(0.0, length(V0))
A1 = fill(0.0, length(V0))
# The times and displacements of the corner will be collected into two vectors
ts = Float64[]
corneruzs = Float64[]
# Let us begin the time integration loop:
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,1000)==0) && println("Step $(step): $(t)")
# Zero out the load
fill!(F1, 0.0);
# Initial acceleration
if step == 1
A0 = M_ff \ (F1)
end
# Update displacement.
@. U1 = U0 + dt*V0 + (dt^2/2)*A0;
# Compute updated acceleration.
A1 .= M_ff \ (-K_ff*U1 + F1)
# Update the velocities.
@. V1 = V0 + (dt/2)*(A0 + A1)
# Switch the temporary vectors for the next step.
U0, U1 = U1, U0;
V0, V1 = V1, V0;
A0, A1 = A1, A0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
##
# ## Plot the results
using Gnuplot
@gp "set terminal windows 4 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
# The end.
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 5189 | # # Tracking transient deformation of a cantilever beam: lumped mass
# Source code: [`bending_wave_Ray_lumped_tut.jl`](bending_wave_Ray_lumped_tut.jl)
# ## Description
# A cantilever beam is given an initial velocity and then at time 0.0 it is
# suddenly stopped by fixing one of its ends. This sends a wave down the beam.
# The beam is modeled as a solid. Trapezoidal rule is used to integrate the
# equations of motion in time. Rayleigh mass-proportional damping is
# incorporated. The dynamic stiffness is factorized for efficiency.
# ## Goals
# - Show how to create the discrete model for implicit dynamics.
# - Demonstrate trapezoidal-rule time stepping.
##
# ## Definitions
# Basic imports.
using LinearAlgebra
using Arpack
# This is the finite element toolkit itself.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
# Input parameters
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.0001;
frequency = 1/0.0058;
Rayleigh_mass = 2*loss_tangent*(2*pi*frequency);
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.013*phun("SEC");
##
# ## Create the discrete model
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 50,1,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
# Assemble the mass matrix as diagonal. The HRZ lumping technique is
# applied through the assembler of the sparse matrix.
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
# Check visually that the mass matrix is in fact diagonal. We use
# the `findnz` function to retrieve the nonzeros in the matrix.
# Each such entry is then plotted as a point.
using Gnuplot
using SparseArrays
I, J, V = findnz(M_ff)
@gp "set terminal windows 1 " :-
@gp :- J I "with p" :-
@gp :- "set xlabel 'Column'" "set xrange [1:$(size(M_ff, 2))] " :-
@gp :- "set ylabel 'Row'" "set yrange [$(size(M_ff, 1)):1] "
# Compare the sum of all the entries of the free-free mass matrix with the total
# mass of the structure. With three displacement directions per node, the sum is
# roughly three times the total mass, less the contributions of the constrained
# nodes.
@show sum(sum(M_ff))
@show L*W*H*rho
# Form the damping matrix.
C_ff = Rayleigh_mass * M_ff
# Figure out the highest frequency in the model, and use a time step that is
# considerably larger than the period of that highest frequency. (The trapezoidal
# rule is unconditionally stable, so the step is dictated by accuracy, not
# stability.)
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 350 * 2/real(sqrt(evals[1]));
# The time stepping loop is protected by `let end` to avoid unpleasant surprises
# with variables getting clobbered by globals.
ts, corneruzs = let dt = dt
# Initial displacement, velocity, and acceleration.
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 3] .= vmag
V0 = gathersysvec(v)
F0 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
F1 = fill(0.0, length(V0))
R = fill(0.0, length(V0))
# Factorize the dynamic stiffness
DSF = cholesky((M_ff + (dt/2)*C_ff + ((dt/2)^2)*K_ff))
# The times and displacements of the corner will be collected into two vectors
ts = Float64[]
corneruzs = Float64[]
# Let us begin the time integration loop:
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,25)==0) && println("Step $(step): $(t)")
# Zero out the load
fill!(F1, 0.0);
# Compute the out of balance force.
R = (M_ff*V0 - C_ff*(dt/2*V0) - K_ff*((dt/2)^2*V0 + dt*U0) + (dt/2)*(F0+F1));
# Calculate the new velocities.
V1 = DSF\R;
# Update the velocities.
U1 = U0 + (dt/2)*(V0+V1);
# Switch the temporary vectors for the next step.
U0, U1 = U1, U0;
V0, V1 = V1, V0;
F0, F1 = F1, F0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
##
# ## Plot the results
using Gnuplot
@gp "set terminal windows 0 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
# The end.
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 4653 | # # Tracking transient deformation of a cantilever beam
# Source code: [`bending_wave_Ray_tut.jl`](bending_wave_Ray_tut.jl)
# ## Description
# A cantilever beam is given an initial velocity and then at time 0.0 it is
# suddenly stopped by fixing one of its ends. This sends a wave down the beam.
# The beam is modeled as a solid. Consistent mass matrix is used.
# Trapezoidal rule is used to integrate the
# equations of motion in time. Rayleigh mass-proportional damping is
# incorporated. The dynamic stiffness is factorized for efficiency.
# Deflection at the free end will look like this:
# 
# ## Goals
# - Show how to create the discrete model.
# - Demonstrate trapezoidal-rule time stepping.
##
# ## Definitions
# Basic imports.
using LinearAlgebra
using Arpack
# This is the finite element toolkit itself.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
# Input parameters
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.0001;
frequency = 1/0.0058;
Rayleigh_mass = 2*loss_tangent*(2*pi*frequency);
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.013*phun("SEC");
##
# ## Create the discrete model
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 50,1,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
# Compare the sum of all the entries of the free-free mass matrix with the total
# mass of the structure. With three displacement directions per node, the sum is
# roughly three times the total mass, less the contributions of the constrained
# nodes.
@show sum(sum(M_ff))
@show L*W*H*rho
# Form the damping matrix.
C_ff = Rayleigh_mass * M_ff
# Figure out the highest frequency in the model, and use a time step that is
# considerably larger than the period of that highest frequency. (The trapezoidal
# rule is unconditionally stable, so the step is dictated by accuracy, not
# stability.)
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 350 * 2/real(sqrt(evals[1]));
# The time stepping loop is protected by `let end` to avoid unpleasant surprises
# with variables getting clobbered by globals.
ts, corneruzs = let dt = dt
# Initial displacement, velocity, and acceleration.
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 3] .= vmag
V0 = gathersysvec(v)
F0 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
F1 = fill(0.0, length(V0))
R = fill(0.0, length(V0))
# Factorize the dynamic stiffness
DSF = cholesky((M_ff + (dt/2)*C_ff + ((dt/2)^2)*K_ff))
# The times and displacements of the corner will be collected into two vectors
ts = Float64[]
corneruzs = Float64[]
# Let us begin the time integration loop:
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,25)==0) && println("Step $(step): $(t)")
# Zero out the load
fill!(F1, 0.0);
# Compute the out of balance force.
R = (M_ff*V0 - C_ff*(dt/2*V0) - K_ff*((dt/2)^2*V0 + dt*U0) + (dt/2)*(F0+F1));
# Calculate the new velocities.
V1 = DSF\R;
# Update the velocities.
U1 = U0 + (dt/2)*(V0+V1);
# Switch the temporary vectors for the next step.
U0, U1 = U1, U0;
V0, V1 = V1, V0;
F0, F1 = F1, F0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
##
# ## Plot the results
using Gnuplot
@gp "set terminal windows 0 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
# The end.
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 183 | using Literate
for t in readdir(".")
if occursin(r".*_tut.jl", t)
println("\nTutorial $t in $(pwd())\n")
Literate.markdown(t, "."; documenter=false);
end
end
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 4448 | # # Suddenly-stopped bar: Centered difference explicit
# Source code: [`sudden_stop_expl_cd_tut.jl`](sudden_stop_expl_cd_tut.jl)
# ## Description
# A bar is given an initial velocity and then at time 0.0 it is
# suddenly stopped by fixing one of its ends. This sends a wave down the bar.
# The output of the simulation is the velocity at the free end, which should
# reproduce the rectangular pulses of the velocity bouncing back and forth
# along the bar.
# The bar is modeled as a solid. The classical centered difference rule is used
# to integrate the equations of motion in time. No damping is present.
# ## Goals
# - Show how to create the discrete model for explicit dynamics.
# - Demonstrate centered difference explicit time stepping.
##
# ## Definitions
# Basic imports.
using LinearAlgebra
using Arpack
# This is the finite element toolkit itself.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
# Input parameters
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
L = 20*phun("mm");
W = 1*phun("mm");
H = 1*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.00005*phun("SEC");
##
# ## Create the discrete model
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 80,4,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[0 0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[L 0 0])
cornerxdof = u.dofnums[corner[1], 1]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
# Assemble the mass matrix as diagonal. The HRZ lumping technique is
# applied through the assembler of the sparse matrix.
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
# Figure out the highest frequency in the model, and use a time step just below
# the critical value $2/\omega_{\max}$ of the centered difference rule.
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 0.9 * 2/real(sqrt(evals[1]));
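# For orientation (illustrative): the axial (bar) wave speed and the time for one
# transit along the bar. The simulation time `tend` then covers several back and
# forth reflections. `c_bar` is a helper introduced just for this estimate.
c_bar = sqrt(E/rho)
@show L/c_bar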
# The time stepping loop is protected by `let end` to avoid unpleasant surprises
# with variables getting clobbered by globals.
ts, cornervxs = let dt = dt
# Initial displacement, velocity, and acceleration.
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 1] .= -vmag
V0 = gathersysvec(v)
F1 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
A0 = fill(0.0, length(V0))
A1 = fill(0.0, length(V0))
phi = 1.005
# The times and displacements of the corner will be collected into two vectors
ts = Float64[]
cornervxs = Float64[]
# Let us begin the time integration loop:
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(cornervxs, V0[cornerxdof])
t = t+dt;
step = step + 1;
(mod(step,1000)==0) && println("Step $(step): $(t)")
# Zero out the load
fill!(F1, 0.0);
# Initial acceleration
if step == 1
A0 = M_ff \ (F1)
end
# Update displacement.
@. U1 = U0 + dt*V0 + (dt^2/2)*A0;
# Compute updated acceleration.
A1 .= M_ff \ (-K_ff*U1 + F1)
# Update the velocities.
@. V1 = V0 + (dt/2)*(A0 + A1)
# Switch the temporary vectors for the next step.
U0, U1 = U1, U0;
V0, V1 = V1, V0;
A0, A1 = A1, A0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, cornervxs # return the collected results
end
##
# ## Plot the results
using Gnuplot
@gp "set terminal windows 7 " :-
@gp :- ts cornervxs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Velocity [mm/s]'"
# The end.
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 4449 | # # Suddenly-stopped bar: TW explicit
# Source code: [`sudden_stop_expl_tw_tut.jl`](sudden_stop_expl_tw_tut.jl)
# ## Description
# A bar is given an initial velocity and then at time 0.0 it is
# suddenly stopped by fixing one of its ends. This sends a wave down the bar.
# The output of the simulation is the velocity at the free end, which should
# reproduce the rectangular pulses of the velocity bouncing back and forth
# along the bar.
# The bar is modeled as a solid. The Tchamwa-Wielgosz explicit rule
# is used to integrate the equations of motion in time. No damping is present.
# ## Goals
# - Show how to create the discrete model for explicit dynamics.
# - Demonstrate Tchamwa-Wielgosz explicit time stepping.
##
# ## Definitions
tst = time()
# Basic imports.
using LinearAlgebra
using Arpack
# This is the finite element toolkit itself.
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
# Input parameters
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
L = 20*phun("mm");
W = 1*phun("mm");
H = 1*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.00005*phun("SEC");
##
# ## Create the discrete model
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 80,4,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[0 0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[L 0 0])
cornerxdof = u.dofnums[corner[1], 1]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
@time K = stiffness(femm, geom, u)
# Assemble the mass matrix as diagonal. The HRZ lumping technique is
# applied through the assembler of the sparse matrix.
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
# Extract the free-free block of the matrices.
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
# Figure out the highest frequency in the model, and use a time step just below
# the critical value $2/\omega_{\max}$ of the centered difference rule.
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 0.9 * 2/real(sqrt(evals[1]));
# The time stepping loop is protected by `let end` to avoid unpleasant surprises
# with variables getting clobbered by globals.
ts, cornervxs = let dt = dt
# Initial displacement, velocity, and acceleration.
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 1] .= -vmag
V0 = gathersysvec(v)
F1 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
A0 = fill(0.0, length(V0))
A1 = fill(0.0, length(V0))
phi = 1.05
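# The parameter `phi` of the Tchamwa-Wielgosz scheme controls its numerical
# (algorithmic) dissipation; the value slightly above 1.0 used here is intended
# to damp the spurious high-frequency ringing.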
# The times and displacements of the corner will be collected into two vectors
ts = Float64[]
cornervxs = Float64[]
# Let us begin the time integration loop:
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(cornervxs, V0[cornerxdof])
t = t+dt;
step = step + 1;
(mod(step,1000)==0) && println("Step $(step): $(t)")
# Zero out the load
fill!(F1, 0.0);
# Initial acceleration
if step == 1
A0 = M_ff \ (F1)
end
# Update displacement.
@. U1 = U0 + dt*V0 + (phi*dt^2)*A0;
# Update the velocities.
@. V1 = V0 + dt*A0
# Compute updated acceleration.
A1 .= M_ff \ (-K_ff*U1 + F1)
# Switch the temporary vectors for the next step.
U0, U1 = U1, U0;
V0, V1 = V1, V0;
A0, A1 = A1, A0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, cornervxs # return the collected results
end
##
# ## Plot the results
using Gnuplot
@gp "set terminal windows 5 " :-
@gp :- ts cornervxs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Velocity [mm/s]'"
@show time()-tst
# The end.
true
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 5670 | # # Twisted beam: Export solid model to Abaqus
# Source code: [`twisted_beam-export-to-abaqus_tut.jl`](twisted_beam-export-to-abaqus_tut.jl)
# ## Description
# In this example we show how to export a model to the finite element software Abaqus.
# The model is solved also in the example `twisted_beam_algo.jl`. Here we export the model for execution in Abaqus.
# The task begins with defining the input parameters, creating the mesh, identifying the nodes to which essential boundary conditions are to be applied, and extracting from the boundary the surface finite elements to which the traction loading at the end of the beam is to be applied.
# This is the finite element toolkit itself.
using FinEtools
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
# We will also use specifically some functions from these modules.
using FinEtoolsDeforLinear.AlgoDeforLinearModule
using FinEtools.MeshExportModule
# Define some parameters, in consistent units.
E = 0.29e8;
nu = 0.22;
W = 1.1;
L = 12.;
t = 0.32;
nl = 2; nt = 1; nw = 1; ref = 2;
p = 1/W/t;
# Loading in the Z direction. Reference (publication by Harder): 5.424e-3.
loadv = [0;0;p]; dir = 3; uex = 0.005424534868469;
# Loading in the Y direction. Reference (Harder): 1.754e-3. Uncomment the line
# below (and comment out the Z-direction line above) to obtain the solution in
# the other direction.
#loadv = [0;p;0]; dir = 2; uex = 0.001753248285256;
tolerance = t/1000;
fens,fes = H8block(L,W,t, nl*ref,nw*ref,nt*ref)
# Reshape the rectangular block into a twisted beam shape.
for i = 1:count(fens)
    let
        a = fens.xyz[i,1]/L*(pi/2); y = fens.xyz[i,2]-(W/2); z = fens.xyz[i,3]-(t/2);
        fens.xyz[i,:] = [fens.xyz[i,1],y*cos(a)-z*sin(a),y*sin(a)+z*cos(a)];
    end
end
# Clamped face of the beam: select all the nodes in this cross-section.
l1 = selectnode(fens; box = [0 0 -100*W 100*W -100*W 100*W], inflate = tolerance)
# Traction on the opposite face
boundaryfes = meshboundary(fes);
Toplist = selectelem(fens,boundaryfes, box = [L L -100*W 100*W -100*W 100*W], inflate = tolerance);
# The tutorial proper begins here. We create the Abaqus exporter and start writing the .inp file.
AE = AbaqusExporter("twisted_beam");
HEADING(AE, "Twisted beam example");
# The part definition is trivial: all will be defined rather for the instance of the part.
PART(AE, "part1");
END_PART(AE);
# The assembly will consist of a single instance (of the empty part defined above). The node set will be defined for the instance itself.
ASSEMBLY(AE, "ASSEM1");
INSTANCE(AE, "INSTNC1", "PART1");
NODE(AE, fens.xyz);
# We export the finite elements themselves. Note that the elements need to have distinct numbers. We start numbering the hexahedra at 1. The definition of the element creates simultaneously an element set which is used below in the section assignment (and the definition of the load).
ELEMENT(AE, "c3d8rh", "AllElements", 1, connasarray(fes))
# The traction is applied to surface elements. Because the elements in the Abaqus model need to have unique numbers, we need to start from an integer which is the number of the solid elements plus one.
ELEMENT(AE, "SFM3D4", "TractionElements", 1+count(fes), connasarray(subset(boundaryfes,Toplist)))
# The nodes in the clamped cross-section are going to be grouped in the node set `l1`.
NSET_NSET(AE, "l1", l1)
# We define a coordinate system (orientation of the material coordinate system), in this example it is the global Cartesian coordinate system. The sections are defined for the solid elements of the interior and the surface elements to which the traction is applied, and the assignment to the elements is by element set (`AllElements` and `TractionElements`). Note that for the solid section we also define reference to hourglass control named `Hourglassctl`.
ORIENTATION(AE, "GlobalOrientation", vec([1. 0 0]), vec([0 1. 0]));
SOLID_SECTION(AE, "elasticity", "GlobalOrientation", "AllElements", "Hourglassctl");
SURFACE_SECTION(AE, "TractionElements")
# This concludes the definition of the instance and of the assembly.
END_INSTANCE(AE);
END_ASSEMBLY(AE);
# This is the definition of the isotropic elastic material.
MATERIAL(AE, "elasticity")
ELASTIC(AE, E, nu)
# The element properties for the interior hexahedra are controlled by the section-control. In this case we are selecting enhanced hourglass stabilization (much preferable to the default stiffness stabilization).
SECTION_CONTROLS(AE, "Hourglassctl", "HOURGLASS=ENHANCED")
# The static perturbation analysis step is defined next.
STEP_PERTURBATION_STATIC(AE)
# The boundary conditions are applied directly to the node set `l1`. Since the node set is defined for the instance, we need to refer to it by the qualified name `ASSEM1.INSTNC1.l1`.
BOUNDARY(AE, "ASSEM1.INSTNC1.l1", 1)
BOUNDARY(AE, "ASSEM1.INSTNC1.l1", 2)
BOUNDARY(AE, "ASSEM1.INSTNC1.l1", 3)
# The traction is applied to the surface quadrilateral elements exported above.
DLOAD(AE, "ASSEM1.INSTNC1.TractionElements", vec(loadv))
# Now we have defined the analysis step and the definition of the model can be concluded.
END_STEP(AE)
close(AE)
# As a quick check, here are the contents of the exported model file:
@show readlines("twisted_beam.inp")
# What remains is to load the model into Abaqus and execute it as a job. Alternatively Abaqus can be called on the input file to carry out the analysis at the command line as
# ```
# abaqus job=twisted_beam.inp
# ```
# The output database `twisted_beam.odb` can then be loaded for postprocessing, for instance from the command line as
# ```
# abaqus viewer database=twisted_beam.odb
# ```
nothing
# # Vibration of a cube of nearly incompressible material: alternative models
# Source code: [`unit_cube_modes_alt_tut.jl`](unit_cube_modes_alt_tut.jl)
# ## Description
# Compute the free-vibration spectrum of a unit cube of nearly
# incompressible isotropic material, E = 1, ν = 0.499, and ρ = 1 (refer to [1]).
# Here we show how alternative finite element models compare:
# The solution with the serendipity quadratic hexahedron is supplemented with
# solutions obtained with advanced finite elements: nodal-integration energy
# stabilized hexahedra and tetrahedra, and mean-strain hexahedra and
# tetrahedra.
# ## References
# [1] Puso MA, Solberg J (2006) A stabilized nodally integrated tetrahedral. International Journal for Numerical Methods in Engineering 67: 841-867.
# [2] P. Krysl, Mean-strain 8-node hexahedron with optimized energy-sampling
# stabilization, Finite Elements in Analysis and Design 108 (2016) 41–53.
# 
# ## Goals
# - Show how to set up a simulation loop that will run all the models and collect data.
# - Show how to present the computed spectrum curves.
##
# ## Definitions
# This is the finite element toolkit itself.
using FinEtools
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
# Convenience import.
using FinEtools.MeshExportModule
# The eigenvalue problem is solved with the Lanczos algorithm from this package.
using Arpack
using SymRCM
# The material properties and dimensions are defined with physical units.
E = 1*phun("PA");
nu = 0.499;
rho = 1*phun("KG/M^3");
a = 1*phun("M"); # length of the side of the cube
N = 8
neigvs = 20 # how many eigenvalues
OmegaShift = (0.01*2*pi)^2; # The frequency with which to shift
# The model is fully three-dimensional, and hence the material model and the
# FEMM created below need to refer to an appropriate model-reduction scheme.
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0);
#
models = [
("H20", H20block, GaussRule(3,2), FEMMDeforLinear, 1),
("ESNICEH8", H8block, NodalTensorProductRule(3), FEMMDeforLinearESNICEH8, 2),
("ESNICET4", T4block, NodalSimplexRule(3), FEMMDeforLinearESNICET4, 2),
("MSH8", H8block, NodalTensorProductRule(3), FEMMDeforLinearMSH8, 2),
("MST10", T10block, TetRule(4), FEMMDeforLinearMST10, 1),
]
# Run the simulation loop over all the models.
sigdig(n) = round(n * 10000) / 10000
results = let
results = []
for m in models
fens, fes = m[2](a, a, a, m[5]*N, m[5]*N, m[5]*N);
@show count(fens)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3))
numbering = let
C = connectionmatrix(FEMMBase(IntegDomain(fes, m[3])), count(fens))
numbering = symrcm(C)
end
numberdofs!(u, numbering);
println("nfreedofs = $(nfreedofs(u))")
femm = m[4](MR, IntegDomain(fes, m[3]), material);
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
M = mass(femm, geom, u);
evals, evecs, nconv = eigs(K+OmegaShift*M, M; nev=neigvs, which=:SM)
@show nconv == neigvs
evals = evals .- OmegaShift;
fs = real(sqrt.(complex(evals)))/(2*pi)
println("$(m[1]) eigenvalues: $(sigdig.(fs)) [Hz]")
push!(results, (m, fs))
end
results # return it
end
##
# ## Present the results graphically
using Gnuplot
@gp "set terminal windows 0 " :-
for r in results
@gp :- collect(1:length(r[2])) vec(r[2]) " lw 2 with lp title '$(r[1][1])' " :-
end
@gp :- "set xlabel 'Mode number [ND]'" :-
@gp :- "set ylabel 'Frequency [Hz]'"
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | code | 5805 | # # Vibration of a cube of nearly incompressible material
# Source code: [`unit_cube_modes_tut.jl`](unit_cube_modes_tut.jl)
# ## Description
# Compute the free-vibration spectrum of a unit cube of nearly
# incompressible isotropic material, E = 1, ν = 0.499, and ρ = 1 (refer to [1]).
# The solution with the `FinEtools` package is compared with a commercial
# software solution, and hence we also export the model to Abaqus.
# ## References
# [1] Puso MA, Solberg J (2006) A stabilized nodally integrated tetrahedral.
# International Journal for Numerical Methods in Engineering 67: 841-867.
# [2] P. Krysl, Mean-strain 8-node hexahedron with optimized energy-sampling
# stabilization, Finite Elements in Analysis and Design 108 (2016) 41–53.
# 
# ## Goals
# - Show how to generate simple mesh.
# - Show how to set up the discrete model for a free vibration problem.
# - Show how to export the model to Abaqus.
##
# ## Definitions
# This is the finite element toolkit itself.
using FinEtools
# The linear stress analysis application is implemented in this package.
using FinEtoolsDeforLinear
# Convenience import.
using FinEtools.MeshExportModule
# The eigenvalue problem is solved with the Lanczos algorithm from this package.
using Arpack
# The material properties and dimensions are defined with physical units.
E = 1*phun("PA");
nu = 0.499;
rho = 1*phun("KG/M^3");
a = 1*phun("M"); # length of the side of the cube
# We generate a mesh of 5 x 5 x 5 serendipity 20-node hexahedral elements in a
# regular grid.
fens, fes = H20block(a, a, a, 5, 5, 5);
# The problem is solved in three dimensions and hence we create the displacement
# field as three-dimensional with three displacement components per node. The
# degrees of freedom are then numbered (note that no essential boundary
# conditions are applied, since the cube is free-floating).
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
numberdofs!(u);
# The model is fully three-dimensional, and hence the material model and the
# FEMM created below need to refer to an appropriate model-reduction scheme.
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0);
# Note that we compute the stiffness and the mass matrix using different FEMMs.
# The difference is only the quadrature rule chosen: in order to make the mass
# matrix non-singular, an accurate Gauss rule needs to be used, whereas for
# the stiffness matrix we want to avoid the excessive stiffness and therefore
# the reduced Gauss rule is used.
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material);
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u);
# The free vibration problem can now be solved. In order for the eigenvalue
# solver to work well, we apply mass-shifting (otherwise the first matrix
# given to the solver – stiffness – would be singular). We specify the number
# of eigenvalues to solve for, and we guess the frequency with which to shift
# as 0.01 Hz.
neigvs = 20 # how many eigenvalues
OmegaShift = (0.01*2*pi)^2; # The frequency with which to shift
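# With the shift s = OmegaShift the solver factorizes K + s*M, which is
# nonsingular even though K itself is singular (the free-floating cube has six
# rigid body modes); the eigenvalues returned are omega^2 + s, and the shift is
# subtracted again right after the solve.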
# The `eigs` routine can now be invoked to solve for a given number of
# frequencies from the smallest-magnitude end of the spectrum. Note that the
# mass shifting needs to be undone when the solution is obtained.
evals, evecs, nconv = eigs(K+OmegaShift*M, M; nev=neigvs, which=:SM)
@show nconv == neigvs
evals = evals .- OmegaShift;
fs = real(sqrt.(complex(evals)))/(2*pi)
sigdig(n) = round(n * 10000) / 10000
println("Eigenvalues: $(sigdig.(fs)) [Hz]")
# The first nonzero frequency, frequency 7, should be around .263 Hz.
# The computed mode can be visualized in Paraview. Use the "Animation view" to
# produce moving pictures for the mode.
mode = 7
scattersysvec!(u, evecs[:,mode])
File = "unit_cube_modes.vtk"
vtkexportmesh(File, fens, fes; vectors=[("mode$mode", u.values)])
@async run(`"paraview.exe" $File`);
# Finally we export the model to Abaqus. Note that we specify the mass
# density property (necessary for dynamics).
AE = AbaqusExporter("unit_cube_modes_h20");
HEADING(AE, "Vibration modes of unit cube of almost incompressible material.");
COMMENT(AE, "The first six frequencies are rigid body modes.");
COMMENT(AE, "The first nonzero frequency (7) should be around 0.26 Hz");
PART(AE, "part1");
END_PART(AE);
ASSEMBLY(AE, "ASSEM1");
INSTANCE(AE, "INSTNC1", "PART1");
NODE(AE, fens.xyz);
COMMENT(AE, "The hybrid form of the serendipity hexahedron is chosen because");
COMMENT(AE, "the material is nearly incompressible.");
ELEMENT(AE, "C3D20RH", "AllElements", 1, connasarray(fes))
ORIENTATION(AE, "GlobalOrientation", vec([1. 0 0]), vec([0 1. 0]));
SOLID_SECTION(AE, "elasticity", "GlobalOrientation", "AllElements");
END_INSTANCE(AE);
END_ASSEMBLY(AE);
MATERIAL(AE, "elasticity")
ELASTIC(AE, E, nu)
DENSITY(AE, rho)
STEP_FREQUENCY(AE, neigvs)
END_STEP(AE)
close(AE)
# It remains to load the model into Abaqus and execute it as a job.
# Alternatively Abaqus can be called on the input file to carry out the
# analysis at the command line as
# ```
# abaqus job=unit_cube_modes_h20.inp
# ```
# The output database `unit_cube_modes_h20.odb` can then be loaded for
# postprocessing, for instance from the command line as
# ```
# abaqus viewer database=unit_cube_modes_h20.odb
# ```
# Don't forget to compare the computed frequencies and the mode shapes. For
# instance, the first six frequencies should be nearly 0, and the seventh
# frequency should be approximately 0.262 Hz. There may be very minor
# differences due to the fact that the FinEtools formulation is purely
# displacement-based, whereas the Abaqus model is hybrid (displacement plus
# pressure).
[](http://www.repostatus.org/#active)
[](https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl/actions)
[](https://app.codecov.io/gh/PetrKryslUCSD/FinEtoolsDeforLinear.jl)
[](https://petrkryslucsd.github.io/FinEtoolsDeforLinear.jl/latest)
[](https://octo-repo-visualization.vercel.app/?repo=PetrKryslUCSD/FinEtoolsDeforLinear.jl)
# FinEtoolsDeforLinear: Linear stress analysis application
[`FinEtools`](https://github.com/PetrKryslUCSD/FinEtools.jl.git) is a package
for basic operations on finite element meshes. `FinEtoolsDeforLinear` is a
package using `FinEtools` to solve linear stress analysis problems. Included is
statics and dynamics (modal analysis, steady-state vibration).
## News
- 05/27/2024: Remove unsupported functions. Adjust to FinEtools 8.
- 04/25/2024: Add utilities for split of isotropic elastic moduli.
[Past news](#past-news)
## Tutorials
There are a number of tutorials explaining the use of this package.
Check out the [index](https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl/blob/main/tutorials/index.md). The tutorials themselves can be executed as
follows:
- Download the package or clone it.
```
git clone https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git
```
- Change into the `tutorials` folder: `cd .\FinEtoolsDeforLinear.jl\tutorials`.
- Start Julia: `julia`.
- Activate the environment:
```
using Pkg; Pkg.activate("."); Pkg.instantiate();
```
- Execute the desired tutorial. Here `name.jl` is the name of the tutorial file:
```
include("name.jl")
```
## Examples
Many examples of solving for static and dynamic stress response with continuum FE models are available.
Begin with changing your working directory to the `examples` folder. Activate
and instantiate the examples environment.
```
using Pkg
Pkg.activate(".")
Pkg.instantiate()
```
There are a number of examples covering statics and dynamics. The examples may
be executed as described in the [conceptual guide to
`FinEtools`](https://petrkryslucsd.github.io/FinEtools.jl/latest).
## <a name="past-news"></a>Past news
- 02/25/2024: Update documentation.
- 02/01/2024: Improve test coverage of IM and ESNICE elements.
- 01/08/2024: Fix bug in the modal analysis algorithm.
- 12/31/2023: Update for Julia 1.10.
- 12/22/2023: Merge the tutorials into the package tree.
- 10/23/2023: Remove dependency on FinEtools predefined types (except for the data dictionary in the algorithm module).
- 06/21/2023: Update for FinEtools 7.0.
- 08/15/2022: Updated all examples.
- 08/09/2022: Updated all 3D dynamic examples.
- 04/03/2022: Examples now have their own project environment.
- 02/08/2021: Updated dependencies for Julia 1.6 and FinEtools 5.0.
- 08/23/2020: Added a separate tutorial package, [FinEtoolsDeforLinearTutorials.jl](https://petrkryslucsd.github.io/FinEtoolsDeforLinearTutorials.jl)).
- 08/17/2020: Added tutorials to the documentation.
- 04/02/2020: The examples still need to be updated, some don't work, sorry.
- 01/23/2020: Dependencies have been updated to work with Julia 1.3.1.
- 10/12/2019: Corrected a design flaw in the matrix utilities module.
- 07/30/2019: Support for fourth order tensors added.
- 07/27/2019: The interface to stress/strain conversion routines changed to be generic. The intent was to allow for automatic differentiation.
- 07/18/2019: Several step-by-step tutorials have been added.
- 06/11/2019: Applications are now separated out from the `FinEtools` package.
Issues and ideas:
-- Documenter:
using FinEtoolsDeforLinear
using DocumenterTools
Travis.genkeys(user="PetrKryslUCSD", repo="https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl")
DocumenterTools.genkeys(user="PetrKryslUCSD", repo="[email protected]:PetrKryslUCSD/FinEtoolsDeforLinear.jl.git")
-- Get rid of lumpedmass(): It is already implemented with an HRZ assembler.
-- Formatting: JuliaFormatter
using JuliaFormatter
format("./examples", SciMLStyle())
# FinEtoolsDeforLinear Documentation
The [`FinEtools`](https://petrkryslucsd.github.io/FinEtools.jl/latest/index.html) package is used here to solve linear deformation static and dynamic problems.
Tutorials are provided in the form of Julia scripts and Markdown files in a dedicated folder: [`index of tutorials`](https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl/blob/main/tutorials/index.md).
```@contents
```
## Conceptual guide
The construction of the toolkit is described: the composition of modules, the basic data structures, the methodology of computing quantities required in the finite element methodology, and more.
```@contents
Pages = [
"guide/guide.md",
]
Depth = 1
```
## Manual
The description of the types and the functions, organized by module and/or other logical principle.
```@contents
Pages = [
"man/man.md",
]
Depth = 2
```
## Tutorials
The [tutorials](https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl) are provided in the form of Julia scripts and Markdown files in the `tutorials` folder.
# Guide
The
[`FinEtools`](https://petrkryslucsd.github.io/FinEtools.jl/latest/index.html)
package is used here to solve linear stress analysis (deformation)
problems.
## Modules
The package `FinEtoolsDeforLinear` has the following structure:
- `FinEtoolsDeforLinear` is the top-level module.
- Linear deformation: `AlgoDeforLinearModule` (algorithms),
`DeforModelRedModule` (model-reduction definitions, 3D, plane strain
and stress, and so on), `FEMMDeforLinearBaseModule`,
`FEMMDeforLinearModule`, `FEMMDeforLinearMSModule`,
`FEMMDeforWinklerModule` (FEM machines to evaluate the matrix and
vector quantities), `MatDeforModule`, `MatDeforElastIsoModule`,
`MatDeforElastOrthoModule` (elastic material models).
## Linear deformation FEM machines
For the base machine for linear deformation, `FEMMDeforLinearBase`,
assumes standard isoparametric finite elements. It evaluates the
interior integrals:
- The stiffness matrix, the mass matrix.
- The load vector corresponding to thermal strains.
Additionally:
- Function to inspect integration points.
The FEM machine `FEMMDeforLinear` simply stores the data required by the
base `FEMMDeforLinearBase`.
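A minimal sketch of this workflow, patterned on the tutorials bundled with this package (the mesh size, material constants, and quadrature rule below are illustrative assumptions, not prescriptions):

```julia
using FinEtools
using FinEtoolsDeforLinear
# A small hexahedral mesh of a unit cube (illustrative only).
fens, fes = H8block(1.0, 1.0, 1.0, 4, 4, 4)
geom = NodalField(fens.xyz)                      # geometry field
u = NodalField(zeros(size(fens.xyz, 1), 3))      # displacement field, 3 dofs per node
numberdofs!(u)
MR = DeforModelRed3D                             # fully three-dimensional model reduction
material = MatDeforElastIso(MR, 8000.0, 200.0e9, 0.3, 0.0)  # rho, E, nu, CTE
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3, 2)), material)
K = stiffness(femm, geom, u)                     # interior integral: stiffness matrix
M = mass(femm, geom, u)                          # interior integral: mass matrix
```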
The machine `FEMMDeforWinkler` is specialized for the boundary integrals
for bodies supported on continuously distributed springs:
- Compute the stiffness matrix corresponding to the springs.
The mean-strain FEM machine `FEMMDeforLinearMS` implements advanced
hexahedral and tetrahedral elements based on multi-field theory and
energy-sampling stabilization. It provides functions to compute:
- The stiffness matrix, the mass matrix.
- The load vector corresponding to thermal strains.
Additionally it defines:
- Function to inspect integration points.
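A usage sketch for the mean-strain hexahedron, reusing the names from the sketch above; note that these machines must be given the geometry via `associategeometry!` before the matrices can be computed:

```julia
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3, 2)), material)
femm = associategeometry!(femm, geom)   # required before stiffness/mass
K = stiffness(femm, geom, u)
M = mass(femm, geom, u)
```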
## Materials for linear deformation analysis
The module `MatDeforModule` provides functions to convert between vector
and matrix (tensor) representations of stress and strain. Further,
functions to rotate stress and strain between different coordinate
systems (based upon the model-reduction type, 3-D, 2-D, or 1-D) are
provided.
Currently there are material types for isotropic and orthotropic linear
elastic materials. The user may add additional material types by
deriving from `AbstractMatDefor` and equipping them with three methods:
(1) compute the tangent moduli, (2) update the material state, (3)
compute the thermal strain.
For full generality, material types should implement these methods for
fully three-dimensional, plane strain and plane stress, 2D axially
symmetric, and one-dimensional deformation models.
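For instance (a hedged sketch with illustrative constants), the same isotropic elastic material type can be instantiated for different model-reduction types:

```julia
using FinEtools
using FinEtoolsDeforLinear
E = 200.0e9; nu = 0.3                                             # illustrative values
mat3d = MatDeforElastIso(DeforModelRed3D, 0.0, E, nu, 0.0)        # fully three-dimensional
mat2d = MatDeforElastIso(DeforModelRed2DStress, 0.0, E, nu, 0.0)  # plane stress
```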
## Linear deformation algorithms
There are algorithms for
- Linear static analysis;
- Export of the deformed shape for visualization;
- Export of the nodal and elementwise stress fields for visualization;
- Modal (free-vibration) analysis;
- Export of modal shapes for visualization;
- Subspace-iteration method implementation.
### Model data
Model data is a dictionary, with string keys, and arbitrary values.
The documentation string for each method of an algorithm lists the required input.
For instance, for the method `linearstatics` of the `AlgoDeforLinearModule`, the
`modeldata` dictionary needs to provide key-value pairs for the finite element node set, and
the regions, the boundary conditions, and so on.
The `modeldata` may be also supplemented with additional key-value pairs inside an algorithm
and returned for further processing by other algorithms.
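As a hedged sketch (following the Cook panel tutorial shipped with this package), a `modeldata` dictionary for `linearstatics` might be assembled and consumed like this; `region1`, `ess1`, `ess2`, and `flux1` stand for region, essential-boundary-condition, and traction dictionaries built beforehand:

```julia
using FinEtools
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
modeldata = FDataDict("fens" => fens,            # finite element nodes
    "regions" => [region1],                      # material regions (FEMM + material)
    "essential_bcs" => [ess1, ess2],             # displacement boundary conditions
    "traction_bcs" => [flux1])                   # natural boundary conditions
modeldata = AlgoDeforLinearModule.linearstatics(modeldata)
u = modeldata["u"]        # displacement field computed by the solver
geom = modeldata["geom"]  # geometry field added by the algorithm
```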
# Reference Manual
## Simple FE model (volume)
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.FEMMDeforLinearBaseModule, FinEtoolsDeforLinear.FEMMDeforLinearModule, ]
Private = true
Order = [:function, :type]
```
## Simple FE models (surface)
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.FEMMDeforWinklerModule, FinEtoolsDeforLinear.FEMMDeforSurfaceDampingModule, ]
Private = true
Order = [:function, :type]
```
## Advanced: Mean-strain FEM
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.FEMMDeforLinearMSModule, ]
Private = true
Order = [:function, :type]
```
## Advanced: Nodal integration
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.FEMMDeforLinearNICEModule, FinEtoolsDeforLinear.FEMMDeforLinearESNICEModule, ]
Private = true
Order = [:function, :type]
```
## Advanced: Incompatible modes
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.FEMMDeforLinearIMModule]
Private = true
Order = [:function, :type]
```
## Algorithms
### Linear deformation
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.AlgoDeforLinearModule]
Private = true
Order = [:function]
```
## Material models
### Material for deformation, base functionality
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.MatDeforModule, ]
Private = true
Order = [:function, :type]
```
### Elasticity
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.MatDeforLinearElasticModule, ]
Private = true
Order = [:function, :type]
```
### Isotropic elasticity
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.MatDeforElastIsoModule, ]
Private = true
Order = [:function, :type]
```
### Orthotropic elasticity
```@autodocs
Modules = [FinEtools, FinEtoolsDeforLinear.MatDeforElastOrthoModule,]
Private = true
Order = [:function, :type]
```
## Modules
```@docs
FinEtoolsDeforLinear.FEMMDeforLinearESNICEModule
FinEtoolsDeforLinear.FEMMDeforLinearBaseModule
FinEtoolsDeforLinear.FinEtoolsDeforLinear
FinEtoolsDeforLinear.FEMMDeforLinearMSModule
FinEtoolsDeforLinear.MatDeforModule
FinEtoolsDeforLinear.FEMMDeforLinearModule
FinEtoolsDeforLinear.AlgoDeforLinearModule
FinEtoolsDeforLinear.FEMMDeforLinearNICEModule
FinEtoolsDeforLinear.FEMMDeforSurfaceDampingModule
FinEtoolsDeforLinear.MatDeforElastIsoModule
FinEtoolsDeforLinear.MatDeforLinearElasticModule
FinEtoolsDeforLinear.FEMMDeforWinklerModule
FinEtoolsDeforLinear.MatDeforElastOrthoModule
FinEtoolsDeforLinear.FEMMDeforLinearIMModule
```
# FinEtools (Finite Element tools) Documentation
## Conceptual guide
The construction of the toolkit is described: the composition of modules, the basic data structures, the methodology of computing quantities required in the finite element methodology, and more.
```@contents
Pages = [
"guide/guide.md",
]
Depth = 1
```
## Manual
The description of the types and the functions, organized by module and/or other logical principle.
```@contents
Pages = [
"man/types.md",
"man/functions.md",
]
Depth = 2
```
# Cook panel under plane stress
Source code: [`Cook-plane-stress_tut.jl`](Cook-plane-stress_tut.jl)
## Description
In this example we investigate the well-known benchmark of a tapered panel
under plane stress conditions known under the name of Cook. The problem has
been solved many times with a variety of finite element models and hence the
solution is well-known.
## Goals
- Show how to generate the mesh by creating a rectangular block and reshaping it.
- Execute the simulation with a static-equilibrium algorithm (solver).
## Definitions
The problem is solved in a script. We begin by `using` the top-level module `FinEtools`.
Further, we use the linear-deformation application package.
````julia
using FinEtools
using FinEtoolsDeforLinear
````
With the algorithm modules, the problem can be set up (the materials, boundary
conditions, and mesh are defined) and handed off to an algorithm (in this
case linear static solution). Then for postprocessing another set of
algorithms can be invoked.
````julia
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
A few input parameters are defined: the material parameters. Note: the units
are consistent, but unnamed.
````julia
E = 1.0;
nu = 1.0/3;
````
The geometry of the tapered panel.
````julia
width = 48.0; height = 44.0; thickness = 1.0;
free_height = 16.0;
````
Location of tracked deflection is the midpoint of the loaded edge.
````julia
Mid_edge = [48.0, 52.0];
````
The tapered panel is loaded along the free edge with a unit force, which is
here converted to loading per unit area.
````julia
magn = 1.0/free_height/thickness;# Magnitude of applied load
````
For the above input parameters the converged displacement of the tip of the
tapered panel in the direction of the applied shear load is
````julia
convutip = 23.97;
````
The mesh is generated as a rectangular block to begin with, and then the
coordinates of the nodes are tweaked into the tapered panel shape. In this
case we are using quadratic triangles (T6).
````julia
n = 10; # number of elements per side
fens, fes = T6block(width, height, n, n)
````
Reshape the rectangle into a trapezoidal panel:
````julia
for i in 1:count(fens)
fens.xyz[i,2] += (fens.xyz[i,1]/width)*(height -fens.xyz[i,2]/height*(height-free_height));
end
````
The boundary conditions are applied to selected finite element nodes. The
selection is based on the inclusion in a selection "box".
````julia
tolerance = minimum([width, height])/n/1000.;#Geometrical tolerance
````
Clamped edge of the membrane
````julia
l1 = selectnode(fens; box=[0.,0.,-Inf, Inf], inflate = tolerance);
````
The list of the selected nodes is then used twice, to fix the degree of
freedom in the direction 1 and in the direction 2. The essential-boundary
condition data is stored in dictionaries: `ess1` and `ess2 `. These
dictionaries are used below to compose the computational model.
````julia
ess1 = FDataDict("displacement"=> 0.0, "component"=> 1, "node_list"=>l1);
ess2 = FDataDict("displacement"=> 0.0, "component"=> 2, "node_list"=>l1);
````
The traction boundary condition is applied to the finite elements on the boundary of the panel. First we generate the three-node "curve" elements on the entire boundary of the panel.
````julia
boundaryfes = meshboundary(fes);
````
Then from these finite elements we choose the ones that are inside the box
that captures the edge of the geometry to which the traction should be
applied.
````julia
Toplist = selectelem(fens, boundaryfes, box= [width, width, -Inf, Inf ], inflate= tolerance);
````
To apply the traction we create a finite element model machine (FEMM). For the
evaluation of the traction it is sufficient to create a "base" FEMM. It
consists of the geometry data `IntegDomain` (connectivity, integration rule,
evaluation of the basis functions and basis function gradients with respect
to the parametric coordinates). This object is composed of the list of the
finite elements and an appropriate quadrature rule (Gauss rule here).
````julia
el1femm = FEMMBase(IntegDomain(subset(boundaryfes, Toplist), GaussRule(1, 3), thickness));
````
The traction boundary condition is specified with a constant traction vector and the FEMM that will be used to evaluate the load vector.
````julia
flux1 = FDataDict("traction_vector"=>[0.0,+magn],
"femm"=>el1femm
);
````
We make the dictionary for the region (the interior of the domain). The FEMM
for the evaluation of the integrals over the interior of the domain (that is
the stiffness matrix) and the material are needed. The geometry data now is
equipped with the triangular three-point rule. Note the model-reduction
type which is used to dispatch to appropriate specializations of the material
routines and the FEMM which needs to execute different code for different
reduced-dimension models. Here the model reduction is "plane stress".
````julia
MR = DeforModelRed2DStress
material = MatDeforElastIso(MR, 0.0, E, nu, 0.0)
region1 = FDataDict("femm"=>FEMMDeforLinear(MR, IntegDomain(fes, TriRule(3), thickness), material));
````
The model data is a dictionary. In the present example it consists of the
node set, the array of dictionaries for the regions, and arrays of
dictionaries for each essential and natural boundary condition.
````julia
modeldata = FDataDict("fens"=>fens,
"regions"=>[region1],
"essential_bcs"=>[ess1, ess2],
"traction_bcs"=>[flux1]
);
````
When the model data is defined, we simply pass it to the algorithm.
````julia
modeldata = AlgoDeforLinearModule.linearstatics(modeldata);
````
The model data is augmented in the algorithm by the nodal field representing
the geometry and the displacement field computed by solving the system of
linear algebraic equations of equilibrium.
````julia
u = modeldata["u"];
geom = modeldata["geom"];
````
The complete information returned from the algorithm is
````julia
@show keys(modeldata)
````
Now we can extract the displacement at the mid-edge node and compare to the
converged (reference) value. The code below selects the node inside a very
small box of the size `tolerance` which presumably contains only a single
node, the one at the midpoint of the edge.
````julia
nl = selectnode(fens, box=[Mid_edge[1],Mid_edge[1],Mid_edge[2],Mid_edge[2]],
inflate=tolerance);
theutip = u.values[nl,:]
println("displacement =$(theutip[2]) as compared to converged $convutip")
````
For postprocessing we will export a VTK file with the displacement field
(vectors) and one scalar field ($\sigma_{xy}$).
````julia
modeldata["postprocessing"] = FDataDict("file"=>"cookstress",
"quantity"=>:Cauchy, "component"=>:xy);
modeldata = AlgoDeforLinearModule.exportstress(modeldata);
````
The attribute `"postprocessing"` holds additional data computed and returned
by the algorithm:
````julia
@show keys(modeldata["postprocessing"])
````
The exported data can be digested as follows: `modeldata["postprocessing"]["exported"]` is an array of exported items.
````julia
display(keys(modeldata["postprocessing"]["exported"]))
````
Each entry of the array is a dictionary:
````julia
display(keys(modeldata["postprocessing"]["exported"][1]))
````
Provided we have `paraview` in the PATH, we can bring it up to display the
exported data.
````julia
File = modeldata["postprocessing"]["exported"][1]["file"]
@async run(`"paraview.exe" $File`);
````
We can also extract the minimum and maximum value of the shear stress
(-0.06, and 0.12).
````julia
display(modeldata["postprocessing"]["exported"][1]["quantity"])
display(modeldata["postprocessing"]["exported"][1]["component"])
fld = modeldata["postprocessing"]["exported"][1]["field"]
println("$(minimum(fld.values)) $(maximum(fld.values))")
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# TEST FV32: Cantilevered tapered membrane, free vibration
Source code: [`FV32_tut.jl`](FV32_tut.jl)
## Description
FV32: Cantilevered tapered membrane is a test recommended by the National
Agency for Finite Element Methods and Standards (U.K.): Test FV32 from NAFEMS
publication TNSB, Rev. 3, “The Standard NAFEMS Benchmarks,” October 1990.
Reference solution: 44.623, 130.03, 162.70, 246.05, 379.90, 391.44 Hz for the
first six modes.
The benchmark is originally for plane stress conditions. We simulate the
plane-stress conditions with a three-dimensional mesh that is constrained
along one plane of nodes to effect the constrained motion only in the plane
of the trapezoidal membrane.

## References
[1] Test FV32 from NAFEMS publication TNSB, Rev. 3, “The Standard NAFEMS
Benchmarks,” October 1990.
## Goals
- Show how to generate hexahedral mesh in a rectangular block and shape it
into a trapezoid.
- Set up model data for the solution algorithms.
- Use two different finite element model machines to evaluate the stiffness
and the mass.
- Execute the modal algorithm and export the results with another algorithm.
````julia
#
````
## Definitions
Bring in the required support from the basic linear algebra, eigenvalue
solvers, and the finite element tools.
````julia
using LinearAlgebra
using Arpack
using FinEtools
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear: AlgoDeforLinearModule
````
The input data is given by the benchmark.
````julia
E = 200*phun("GPA");
nu = 0.3;
rho= 8000*phun("KG/M^3");
L = 10*phun("M");
W0 = 5*phun("M");
WL = 1*phun("M");
H = 0.05*phun("M");
````
We shall generate a three-dimensional mesh. It should have 1 element through
the thickness, and 8 and 4 elements in the plane of the membrane.
````julia
nL, nW, nH = 8, 4, 1;# How many element edges per side?
````
The reference frequencies are obtained from [1].
````julia
Reffs = [44.623 130.03 162.70 246.05 379.90 391.44]
````
The three-dimensional mesh of 20 node serendipity hexahedral should correspond
to the plane-stress quadratic serendipity quadrilateral (CPS8R) used in the
Abaqus benchmark. We simulate the plane-stress conditions with a
three-dimensional mesh that is constrained along one plane of nodes to effect
the constrained motion only in the plane of the trapezoidal membrane. No
bending out of plane!
First we generate mesh of a rectangular block.
````julia
fens,fes = H20block(1.0, 2.0, 1.0, nL, nW, nH)
````
Now distort the rectangular block into the tapered plate.
````julia
for i in 1:count(fens)
xi, eta, theta = fens.xyz[i,:];
eta = eta - 1.0
fens.xyz[i,:] = [xi*L eta*(1.0 - 0.8*xi)*W0/2 theta*H/2];
end
````
We can visualize the mesh with Paraview (for instance).
````julia
File = "FV32-mesh.vtk"
vtkexportmesh(File, fens, fes)
@async run(`"paraview.exe" $File`)
````
The simulation will be executed with the help of algorithms defined in the
package `FinEtoolsDeforLinear`. The algorithms accept a dictionary of model
data. The model data dictionary will be built up as follows.
First we make the interior region. The model reduction is for a three-dimensional finite element model.
````julia
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
````
We shall create two separate finite element model machines. They are
distinguished by the quadrature rule. The mass rule, in order to evaluate the
mass matrix accurately, needs to be of higher order than the one we prefer to
use for the stiffness.
````julia
region1 = FDataDict("femm"=>FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material), "femm_mass"=>FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material))
````
Select nodes that will be clamped.
````julia
nl1 = selectnode(fens; plane=[1.0 0.0 0.0 0.0], thickness=H/1.0e4)
ebc1 = FDataDict("node_list"=>nl1, "component"=>1, "displacement"=>0.0)
ebc2 = FDataDict("node_list"=>nl1, "component"=>2, "displacement"=>0.0)
ebc3 = FDataDict("node_list"=>nl1, "component"=>3, "displacement"=>0.0)
````
Export a VTK file to visualize the selected points. Choose the
representation "Points", and select color and size approximately 4. These
notes should correspond to the clamped base of the membrane.
````julia
File = "FV32-nl1.vtk"
vtkexportmesh(File, fens, FESetP1(reshape(nl1, length(nl1), 1)))
````
Select all nodes on the plane Z = 0. This will be prevented from moving in the
Z direction.
````julia
nl4 = selectnode(fens; plane=[0.0 0.0 1.0 0.0], thickness=H/1.0e4)
ebc4 = FDataDict("node_list"=>nl4, "component"=>3, "displacement"=>0.0)
````
Export a VTK file to visualize the selected points. Choose the
representation "Points", and select color and size approximately 4. These
points all should be on the bottom face of the three-dimensional domain.
````julia
File = "FV32-nl4.vtk"
vtkexportmesh(File, fens, FESetP1(reshape(nl4, length(nl4), 1)))
````
Make model data: the nodes, the regions, the boundary conditions, and the
number of eigenvalues are set. Note that the number of eigenvalues needs to
be set to 6+N, where 6 is the number of rigid body modes, and N is the
number of deformation frequencies we are interested in.
````julia
neigvs = 10 # how many eigenvalues
modeldata = FDataDict("fens"=> fens, "regions"=> [region1], "essential_bcs"=>[ebc1 ebc2 ebc3 ebc4], "neigvs"=>neigvs)
````
Solve using an algorithm: the modal solver. The solver will supplement the
model data with the geometry and displacement fields, and the solution
(eigenvalues, eigenvectors), and the data upon return can be extracted from
the dictionary.
````julia
modeldata = AlgoDeforLinearModule.modal(modeldata)
````
Here we take the angular frequencies computed by the solver and convert them to natural frequencies in hertz.
````julia
fs = modeldata["omega"]/(2*pi)
println("Eigenvalues: $fs [Hz]")
println("Percentage frequency errors: $((vec(fs[1:6]) - vec(Reffs))./vec(Reffs)*100)")
````
The problem was solved for instance with Abaqus, using plane stress eight node
elements. The results were:
| Element | Frequencies (relative errors) |
| ------- | ---------------------------- |
| CPS8R | 44.629 (0.02) 130.11 (0.06) 162.70 (0.00) 246.42 (0.15) 381.32 (0.37) 391.51 (0.02) |
Compare these numbers with those computed by our three-dimensional model.
The mode shapes may be visualized with `paraview`. Here is for instance mode
8:

The algorithm to export the mode shapes expects some input. We shall specify
the filename and the numbers of modes to export.
````julia
modeldata["postprocessing"] = FDataDict("file"=>"FV32-modes", "mode"=>1:neigvs)
modeldata = AlgoDeforLinearModule.exportmode(modeldata)
````
The algorithm attaches a little bit to the name of the exported file. If
`paraview.exe` is installed, the command below should bring up the
postprocessing file.
````julia
@async run(`"paraview.exe" $(modeldata["postprocessing"]["file"]*"1.vtk")`)
````
To animate the mode shape in `Paraview` do the following:
- Apply the filter "Warp by vector".
- Turn on the "Animation view".
- Add the mode shape data set ("WarpByVector1") by clicking the "+".
- Double-click the line with the data set. The "Animation Keyframes" dialog
will come up. Double-click "Ramp" interpolation, and change it
to "Sinusoid". Set the frequency to 1.0. Change the "Value" from 0 to 100.
- In the animation view, set the mode to "Real-time", and the duration to 4.0
seconds.
- Click on the "Play" button. If you wish, click on the "Loop" button.
````julia
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# R0031/3 Composite plate test
Source code: [`R0031-3-Composite-benchmark_tut.jl`](R0031-3-Composite-benchmark_tut.jl)
## Description
This is a test recommended by the National Agency for Finite Element Methods and Standards (U.K.): Test R0031/3 from NAFEMS publication R0031, “Composites Benchmarks,” February 1995. It is a composite (sandwich) plate of square shape, simply supported along all four edges. Uniform transverse loading is applied to the top skin. The modeled part is one quarter of the full plate here. The serendipity quadratic hexahedra are used, with full integration.
The solution can be compared with the benchmark results in the Abaqus manual ["Abaqus Benchmarks Guide"](http://130.149.89.49:2080/v6.7/books/bmk/default.htm?startat=ch04s09anf83.html).
We begin by `using` the toolkit `FinEtools`.
````julia
using FinEtools
````
Further, we use the linear-deformation application package.
````julia
using FinEtoolsDeforLinear
````
The problem will be solved with a pre-packaged algorithm.
````julia
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
For some basic statistics.
````julia
import Statistics: mean
````
The material parameters are specified for an orthotropic material model. The units are attached using the `phun` function which can take the specification of the units and spits out the numerical multiplier. Here the benchmark specifies the input parameters in the English Imperial units. The skin material:
````julia
E1s = 1.0e7*phun("psi")
E2s = 0.4e7*phun("psi")
E3s = 0.4e7*phun("psi")
nu12s = 0.3
nu13s = 0.3
nu23s = 0.3
G12s = 0.1875e7*phun("psi")
G13s = 0.1875e7*phun("psi")
G23s = 0.1875e7*phun("psi");
````
The core material:
````julia
E1c = 10.0*phun("psi")
E2c = 10.0*phun("psi")
E3c = 10.0e4*phun("psi")
nu12c = 0.
nu13c = 0.
nu23c = 0.
G12c = 10.0*phun("psi")
G13c = 3.0e4*phun("psi")
G23c = 1.2e4*phun("psi");
````
The magnitude of the distributed uniform transfers loading is
````julia
tmag = 100*phun("psi");
````
Now we generate the mesh. The sandwich plate volume is divided into a regular Cartesian grid in the $X$ and $Y$ direction in the plane of the plate, and in the thickness direction it is divided into three layers, with each layer again subdivided into multiple elements.
````julia
L = 10.0*phun("in") # side of the square plate
nL = 8 # number of elements along the side of the plate
xs = collect(linearspace(0.0, L/2, nL+1))
ys = collect(linearspace(0.0, L/2, nL+1));
````
The thicknesses are specified from the bottom of the plate: skin, core, and then again skin.
````julia
ts = [0.028; 0.75; 0.028]*phun("in")
nts = [2; 3; 2]; # number of elements through the thickness for each layer
````
The `H8layeredplatex` meshing function generates the mesh and marks the elements with a label identifying the layer to which they belong. We will use the label to create separate regions, with their own separate materials.
````julia
fens,fes = H8layeredplatex(xs, ys, ts, nts)
````
The linear hexahedra are subsequently converted to serendipity (quadratic) elements.
````julia
fens,fes = H8toH20(fens,fes);
````
The model reduction here simply says this is a fully three-dimensional model. The two orthotropic materials are created.
````julia
MR = DeforModelRed3D
skinmaterial = MatDeforElastOrtho(MR,
0.0, E1s, E2s, E3s,
nu12s, nu13s, nu23s,
G12s, G13s, G23s,
0.0, 0.0, 0.0)
corematerial = MatDeforElastOrtho(MR,
0.0, E1c, E2c, E3c,
nu12c, nu13c, nu23c,
G12c, G13c, G23c,
0.0, 0.0, 0.0);
````
Now we are ready to create three material regions: one for the bottom skin, one for the core, and one for the top skin. The selection of the finite elements assigned to each of the three regions is based on the label. Full Gauss quadrature is used.
````julia
rl1 = selectelem(fens, fes, label=1)
skinbot = FDataDict("femm"=>FEMMDeforLinear(MR,
IntegDomain(subset(fes, rl1), GaussRule(3, 3)), skinmaterial))
rl3 = selectelem(fens, fes, label=3)
skintop = FDataDict("femm"=>FEMMDeforLinear(MR,
IntegDomain(subset(fes, rl3), GaussRule(3, 3)), skinmaterial))
rl2 = selectelem(fens, fes, label=2)
core = FDataDict("femm"=>FEMMDeforLinear(MR,
IntegDomain(subset(fes, rl2), GaussRule(3, 3)), corematerial));
````
Note that since we did not specify the material coordinate system, the default is assumed (which is identical to the global Cartesian coordinate system).
````julia
@show skinbot["femm"].mcsys
````
Next we select the nodes to which essential boundary conditions will be applied. A node is selected if it is within the specified box which for the purpose of the test is inflated in all directions by `tolerance`. The nodes on the planes of symmetry need to be selected, and also the nodes along the edges (faces) to be simply supported need to be identified.
````julia
tolerance = 0.0001*phun("in")
lx0 = selectnode(fens, box=[0.0 0.0 -Inf Inf -Inf Inf], inflate=tolerance)
lxL2 = selectnode(fens, box=[L/2 L/2 -Inf Inf -Inf Inf], inflate=tolerance)
ly0 = selectnode(fens, box=[-Inf Inf 0.0 0.0 -Inf Inf], inflate=tolerance)
lyL2 = selectnode(fens, box=[-Inf Inf L/2 L/2 -Inf Inf], inflate=tolerance);
````
We have four sides of the quarter of the plate, two on each plane of symmetry, and two along the circumference. Hence we create four essential boundary condition definitions, one for each of the sides of the plate.
````julia
ex0 = FDataDict( "displacement"=> 0.0, "component"=> 3, "node_list"=>lx0 )
exL2 = FDataDict( "displacement"=> 0.0, "component"=> 1, "node_list"=>lxL2 )
ey0 = FDataDict( "displacement"=> 0.0, "component"=> 3, "node_list"=>ly0 )
eyL2 = FDataDict( "displacement"=> 0.0, "component"=> 2, "node_list"=>lyL2 );
````
The traction on the top surface of the top skin is applied to the subset of the surface mesh of the entire domain. First we compute the boundary mesh, and then from the boundary mesh we select the surface finite elements that "face" upward (along the positive $Z$ axis).
````julia
bfes = meshboundary(fes)
ttopl = selectelem(fens, bfes; facing=true, direction = [0.0 0.0 1.0])
Trac = FDataDict("traction_vector"=>[0.0; 0.0; -tmag],
"femm"=>FEMMBase(IntegDomain(subset(bfes, ttopl), GaussRule(2, 3))));
````
The model data is composed of the finite element nodes, an array of the regions, an array of the essential boundary condition definitions, and an array of the traction (natural) boundary condition definitions.
````julia
modeldata = FDataDict("fens"=>fens,
"regions"=>[skinbot, core, skintop], "essential_bcs"=>[ex0, exL2, ey0, eyL2],
"traction_bcs"=> [Trac]
);
````
With the model data assembled, we can now call the algorithm.
````julia
modeldata = AlgoDeforLinearModule.linearstatics(modeldata);
````
The computed solution can now be postprocessed. The displacement is reported at the center of the plate, along the line in the direction of the loading. We select all the nodes along this line.
````julia
u = modeldata["u"]
geom = modeldata["geom"]
lcenter = selectnode(fens, box=[L/2 L/2 L/2 L/2 -Inf Inf], inflate=tolerance);
````
The variation of the displacement along this line can be plotted as (the bottom surface of the shell is at $Z=0$):
````julia
ix = sortperm(geom.values[lcenter, 3])
````
Plot the data
````julia
using Gnuplot
Gnuplot.gpexec("reset session")
@gp "set terminal windows 0 " :-
@gp :- geom.values[lcenter, 3][ix] u.values[lcenter, 3][ix]./phun("in") " lw 2 with lp title 'Center deflection' " :-
@gp :- "set xlabel 'Z coordinate [in]'" :-
@gp :- "set ylabel 'Vert displ [in]'"
````
A reasonable single number to report for the deflection at the center is the average of the displacements at the nodes at the center of the plate (-0.136348):
````julia
cdis = mean(u.values[lcenter, 3])/phun("in");
println("Center node displacements $(cdis) [in]; NAFEMS-R0031-3 reference: –0.123 [in]")
````
The reference displacement at the center of -0.123 [in] reported for the benchmark is evaluated from an analytical formulation that neglects transverse (pinching) deformation. Due to the soft core, significant pinching is observed. The solution to the benchmark obtained in Abaqus with incompatible hexahedral elements (with the same number of elements as in the stacked continuum shell solution) is -0.131 [in], which is close to our own solution. Hence, our own solution is probably more accurate than the reference solution because it includes an effect neglected in the benchmark solution.
The deformed shape can be investigated visually in `paraview` (uncomment the line at the bottom if you have `paraview` in your PATH):
````julia
File = "NAFEMS-R0031-3-plate.vtk"
vtkexportmesh(File, connasarray(fes), geom.values, FinEtools.MeshExportModule.VTK.H20;
scalars = [("Layer", fes.label)], vectors = [("displacement", u.values)])
````
@async run(`"paraview.exe" $File`);
Note that the VTK file will contain element labels (which can help us distinguish between the layers) as scalar field, and the displacements as a vector field.
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# TEST 13H: square plate under harmonic loading. Modal model
Source code: [`TEST13H_mod_tut.jl`](TEST13H_mod_tut.jl)
## Description
Harmonic forced vibration problem is solved for a homogeneous square plate,
simply-supported on the circumference. This is the TEST 13H from the Abaqus v
6.12 Benchmarks manual. The test is recommended by the National Agency for
Finite Element Methods and Standards (U.K.): Test 13 from NAFEMS “Selected
Benchmarks for Forced Vibration,” R0016, March 1993.
The plate is discretized with hexahedral solid elements. The simple support
condition is approximated by distributed rollers on the boundary. Because
only the out of plane displacements are prevented, the structure has three
rigid body modes in the plane of the plate.
Homogeneous square plate, simply-supported on the circumference from the test
13 from NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
The nonzero benchmark frequencies are (in hertz): 2.377, 5.961, 5.961, 9.483,
12.133, 12.133, 15.468, 15.468 [Hz].
The magnitude of the displacement for the fundamental frequency (2.377 Hz) is
45.42mm according to the reference solution.

This is the so-called modal model: the response is expressed as a linear
combination of the eigenvectors. The finite element model is transformed into
the modal space. The reduced matrices are in fact diagonal for the Rayleigh
damping. The model natural frequencies can be evaluated fairly quickly, given
the small size of the model, and a lot more frequencies can therefore be
processed in the frequency sweep.
## References
[1] NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
## Goals
- Show how to generate hexahedral mesh, mirroring and merging together parts.
- Execute frequency sweep using a modal model.
## Definitions
Bring in required support.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
using FinEtoolsDeforLinear
using LinearAlgebra
using Arpack
````
The input parameters come from [1].
````julia
E = 200*phun("GPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 8000*phun("kg*m^-3");# mass density
qmagn = 100.0*phun("Pa")
L = 10.0*phun("m"); # side of the square plate
t = 0.05*phun("m"); # thickness of the square plate
nL = 16; nt = 4;
tolerance = t/nt/100;
frequencies = vcat(linearspace(0.0,2.377,150), linearspace(2.377,15.0,400))
````
Compute the parameters of Rayleigh damping. For the two selected
frequencies we have the relationship between the damping ratio and
the Rayleigh parameters
$\xi_m=\frac{1}{2}\left(a_0/\omega_m+a_1\omega_m\right)$
where $m=1,2$. Solving for the Rayleigh parameters $a_0,a_1$ yields:
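Carrying the solve through (these are exactly the expressions evaluated in the code below) gives
$a_0 = \frac{2\omega_1\omega_2\,(\omega_2\xi_1-\omega_1\xi_2)}{\omega_2^2-\omega_1^2}, \qquad a_1 = \frac{2\,(\omega_2\xi_2-\omega_1\xi_1)}{\omega_2^2-\omega_1^2}.$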
````julia
zeta1 = 0.02; zeta2 = 0.02;
o1 = 2*pi*2.377; o2 = 2*pi*15.468;
a0 = 2*(o1*o2)/(o2^2-o1^2)*(o2*zeta1-o1*zeta2);# a0
a1 = 2*(o1*o2)/(o2^2-o1^2)*(-1/o2*zeta1+1/o1*zeta2);# a1
#
````
## Discrete model
Generate the finite element domain as a block.
````julia
fens,fes = H8block(L, L, t, nL, nL, nt)
````
Create the geometry field.
````julia
geom = NodalField(fens.xyz)
````
Create the displacement field. Note that it holds complex numbers.
````julia
u = NodalField(zeros(ComplexF64, size(fens.xyz,1), 3)) # displacement field
````
In order to apply the essential boundary conditions we need to select the
nodes along the side faces of the plate and support them in the Z direction.
````julia
nl = selectnode(fens, box=[0.0 0.0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf 0.0 0.0 -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf L L -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
````
Those boundary conditions can now be applied to the displacement field,...
````julia
applyebc!(u)
````
... and the degrees of freedom can be numbered.
````julia
numberdofs!(u)
println("nfreedofs = $(nfreedofs(u))")
````
The model is three-dimensional.
````julia
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
````
Given how relatively thin the plate is we choose an effective element: the
mean-strain hexahedral element which is quite tolerant of the high aspect
ratio.
````julia
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
````
These elements require to know the geometry before anything else can be
computed using them in a finite element machine. Hence we first need to
associate the geometry with the FEMM.
````julia
femm = associategeometry!(femm, geom)
````
Now we can calculate the stiffness matrix and the mass matrix: both evaluated
with the high-order Gauss rule.
````julia
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
````
The damping matrix is a linear combination of the mass matrix and the
stiffness matrix (Rayleigh model).
````julia
C = a0*M + a1*K
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
C_ff = matrix_blocked(C, nfreedofs(u))[:ff]
````
Find the boundary finite elements on top of the plate. The uniform distributed
loading will be applied to these elements.
````julia
bdryfes = meshboundary(fes)
````
Those facing up (in the positive Z direction) will be chosen:
````julia
topbfl = selectelem(fens, bdryfes, facing=true, direction=[0.0 0.0 1.0])
````
A base finite element model machine will be created to evaluate the loading.
The force intensity is created as driven by a function, but the function
really only just fills the buffer with the constant loading vector.
````julia
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
forceout .= [0.0, 0.0, qmagn]
return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
````
The loading vector is lumped from the distributed uniform loading by
integrating on the boundary. Hence, the dimension of the integration domain
is 2.
````julia
el1femm = FEMMBase(IntegDomain(subset(bdryfes,topbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f = vector_blocked(F, nfreedofs(u))[:f]
#
````
## Transformation into the modal model
We will have to find the natural frequencies and mode shapes. Without much
justification, we picked 60 natural frequencies to include. Usually this
needs to be done carefully so that nothing important is missed. In this case
the number may be an overkill.
````julia
OmegaShift = (0.01*2*pi)^2; # The frequency with which to shift
neigvs = 60
t0 = time()
evals, evecs, nconv = eigs(K_ff+OmegaShift*M_ff, M_ff; nev=neigvs, which=:SM)
evals .-= OmegaShift
@show tep = time() - t0
@show nconv == neigvs
````
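A quick sanity check (a sketch; the rigid-body eigenvalues may come out as tiny
negative numbers, hence the complex square root): convert the eigenvalues to
natural frequencies in hertz and compare the first nonzero ones with the
benchmark values listed in the description.
````julia
# Natural frequencies in hertz; the first three should be (near) zero rigid-body modes
fs = real(sqrt.(complex(evals)))/(2*pi)
@show fs[1:10]
````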
Now the matrices are reduced to the modal subspace. In fact, if we are sure of
the diagonal character of all these reduced matrices, we could store only their
diagonals, and the solution of the modal equations of motion would become trivial.
````julia
Mr = evecs' * M_ff * evecs
Kr = evecs' * K_ff * evecs
Cr = evecs' * C_ff * evecs
````
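As a sketch of that idea (not used in the sweep below): extract the diagonals
and check that the neglected off-diagonal coupling is indeed small. With these
vectors the reduced solve in the sweep could then be carried out by element-wise
division.
````julia
# Keep only the diagonals of the reduced matrices
mr, kr, cr = diag(Mr), diag(Kr), diag(Cr)
# Relative magnitude of whatever off-diagonal coupling is left (should be tiny)
@show norm(Kr - Diagonal(kr)) / norm(Kr)
@show norm(Cr - Diagonal(cr)) / norm(Cr)
````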
The loading also needs to be transformed (projected) into the modal vector
space.
````julia
Fr = evecs' * F_f
#
````
## Sweep through the frequencies
Sweep through the frequencies and calculate the complex displacement vector
for each of the frequencies from the complex balance equations of the
structure.
The entire solution will be stored in this array:
````julia
U1 = zeros(ComplexF64, nfreedofs(u), length(frequencies))
print("Sweeping through $(length(frequencies)) frequencies\n")
t0 = time()
for k in 1:length(frequencies)
f = frequencies[k];
omega = 2*pi*f;
````
Solve the reduced equations.
````julia
Ur = (-omega^2*Mr + 1im*omega*Cr + Kr)\Fr;
````
Reconstruct the solution in the finite element space.
````julia
U1[:, k] = evecs * Ur;
print(".")
end
print("\nTime = $(time()-t0)\n")
````
Find the midpoint of the plate bottom surface. For this purpose the number of
elements along the edge of the plate needs to be divisible by two.
````julia
midpoint = selectnode(fens, box=[L/2 L/2 L/2 L/2 0 0], inflate=tolerance);
````
Check that we found that node.
````julia
@assert midpoint != []
````
Extract the displacement component in the vertical direction (Z).
````julia
midpointdof = u.dofnums[midpoint, 3]
#
````
## Plot the results
````julia
using Gnuplot
````
Plot the amplitude of the FRF.
````julia
umidAmpl = abs.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 0 " :-
@gp :- vec(frequencies) vec(umidAmpl) "lw 2 lc rgb 'red' with lines title 'Midpoint displacement' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement amplitude [mm]'"
````
Plot the FRF real and imaginary components.
````julia
umidReal = real.(U1[midpointdof, :])/phun("MM")
umidImag = imag.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 1 " :-
@gp :- vec(frequencies) vec(umidReal) "lw 2 lc rgb 'red' with lines title 'Real' " :-
@gp :- vec(frequencies) vec(umidImag) "lw 2 lc rgb 'blue' with lines title 'Imaginary' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement FRF [mm]'"
````
Plot the phase shift of the FRF.
````julia
umidPhase = atan.(umidImag, umidReal)/pi*180
@gp "set terminal windows 2 " :-
@gp :- vec(frequencies) vec(umidPhase) "lw 2 lc rgb 'red' with lines title 'Phase shift' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Phase shift [deg]'"
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# TEST 13H: square plate under harmonic loading. Parallel execution.
Source code: [`TEST13H_par_tut.jl`](TEST13H_par_tut.jl)
## Description
Harmonic forced vibration problem is solved for a homogeneous square plate,
simply-supported on the circumference. This is the TEST 13H from the Abaqus v
6.12 Benchmarks manual. The test is recommended by the National Agency for
Finite Element Methods and Standards (U.K.): Test 13 from NAFEMS “Selected
Benchmarks for Forced Vibration,” R0016, March 1993.
The plate is discretized with hexahedral solid elements. The simple support
condition is approximated by distributed rollers on the boundary. Because
only the out of plane displacements are prevented, the structure has three
rigid body modes in the plane of the plate.
Homogeneous square plate, simply-supported on the circumference from the test
13 from NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
The nonzero benchmark frequencies are (in hertz): 2.377, 5.961, 5.961, 9.483,
12.133, 12.133, 15.468, 15.468 [Hz].
The magnitude of the displacement for the fundamental frequency (2.377 Hz) is
45.42mm according to the reference solution.

The harmonic response loop is processed with multiple threads. The algorithm
is embarrassingly parallel (i.e., no communication is required between the
frequencies). Hence the parallel execution is particularly simple.
## References
[1] NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
## Goals
- Show how to generate a hexahedral mesh.
- Execute the frequency sweep in parallel using multiple threads.
````julia
#
````
## Definitions
Bring in required support.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
using FinEtoolsDeforLinear
using LinearAlgebra
using Arpack
````
The input parameters come from [1].
````julia
E = 200*phun("GPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 8000*phun("kg*m^-3");# mass density
qmagn = 100.0*phun("Pa")
L = 10.0*phun("m"); # side of the square plate
t = 0.05*phun("m"); # thickness of the square plate
nL = 16; nt = 4;
tolerance = t/nt/100;
frequencies = vcat(linearspace(0.0,2.377,15), linearspace(2.377,15.0,40))
````
Compute the parameters of Rayleigh damping. For the two selected
frequencies we have the relationship between the damping ratio and
the Rayleigh parameters
$\xi_m=\frac{a_0}{2\omega_m}+\frac{a_1\omega_m}{2}$
where $m=1,2$. Solving for the Rayleigh parameters $a_0,a_1$ yields:
````julia
zeta1 = 0.02; zeta2 = 0.02;
o1 = 2*pi*2.377; o2 = 2*pi*15.468;
a0 = 2*(o1*o2)/(o2^2-o1^2)*(o2*zeta1-o1*zeta2);# a0
a1 = 2*(o1*o2)/(o2^2-o1^2)*(-1/o2*zeta1+1/o1*zeta2);# a1
#
````
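A quick check (a sketch reusing the quantities just computed): substituting the
parameters back into the damping-ratio expression should recover the prescribed
ratio of 0.02 at both anchor frequencies.
````julia
# Recover the damping ratios at the two anchor frequencies; both should equal 0.02
@show a0/(2*o1) + a1*o1/2
@show a0/(2*o2) + a1*o2/2
````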
## Discrete model
Generate the finite element domain as a block.
````julia
fens,fes = H8block(L, L, t, nL, nL, nt)
````
Create the geometry field.
````julia
geom = NodalField(fens.xyz)
````
Create the displacement field. Note that it holds complex numbers.
````julia
u = NodalField(zeros(ComplexF64, size(fens.xyz,1), 3)) # displacement field
````
In order to apply the essential boundary conditions we need to select the
nodes along the side faces of the plate and support them in the Z direction.
````julia
nl = selectnode(fens, box=[0.0 0.0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf 0.0 0.0 -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf L L -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
````
Those boundary conditions can now be applied to the displacement field,...
````julia
applyebc!(u)
````
... and the degrees of freedom can be numbered.
````julia
numberdofs!(u)
println("nfreedofs = $(nfreedofs(u))")
````
The model is three-dimensional.
````julia
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
````
Given how relatively thin the plate is, we choose an effective element: the
mean-strain hexahedral element, which is quite tolerant of high aspect
ratios.
````julia
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
````
These elements need to be supplied with the geometry before anything else can be
computed with them in a finite element machine. Hence we first
associate the geometry with the FEMM.
````julia
femm = associategeometry!(femm, geom)
````
Now we can calculate the stiffness matrix and the mass matrix. Note that the
mass matrix is evaluated with a higher-order Gauss rule than the stiffness matrix.
````julia
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
````
The damping matrix is a linear combination of the mass matrix and the
stiffness matrix (Rayleigh model).
````julia
C = a0*M + a1*K
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
C_ff = matrix_blocked(C, nfreedofs(u))[:ff]
````
Find the boundary finite elements on top of the plate. The uniform distributed
loading will be applied to these elements.
````julia
bdryfes = meshboundary(fes)
````
Those facing up (in the positive Z direction) will be chosen:
````julia
topbfl = selectelem(fens, bdryfes, facing=true, direction=[0.0 0.0 1.0])
````
A base finite element model machine will be created to evaluate the loading.
The force intensity is created as driven by a function, but the function
really just fills the buffer with the constant loading vector.
````julia
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
forceout .= [0.0, 0.0, qmagn]
return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
````
The loading vector is lumped from the distributed uniform loading by
integrating on the boundary. Hence, the dimension of the integration domain
is 2.
````julia
el1femm = FEMMBase(IntegDomain(subset(bdryfes,topbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f = vector_blocked(F, nfreedofs(u))[:f]
#
````
## Sweep through the frequencies
Sweep through the frequencies and calculate the complex displacement vector
for each of the frequencies from the complex balance equations of the
structure.
The entire solution will be stored in this array:
````julia
U1 = zeros(ComplexF64, nfreedofs(u), length(frequencies))
````
It is best to prevent the BLAS library from using threads concurrently with
our own use of threads. The threads might easily become oversubscribed, with
attendant slowdown.
````julia
LinearAlgebra.BLAS.set_num_threads(1)
````
We utilize all the threads with which Julia was started. We can select the
number of threads to use by running the executable as `julia -t n`, where `n`
is the number of threads.
````julia
using Base.Threads
print("Number of threads: $(nthreads())\n")
print("Sweeping through $(length(frequencies)) frequencies\n")
t0 = time()
Threads.@threads for k in 1:length(frequencies)
f = frequencies[k];
omega = 2*pi*f;
U1[:, k] = (-omega^2*M_ff + 1im*omega*C_ff + K_ff)\F_f;
print(".")
end
print("\nTime = $(time()-t0)\n")
````
On Windows the scaling is not great; this is not Julia's fault, but rather a
limitation of the operating system. Linux usually gives much better speedups.
Find the midpoint of the plate bottom surface. For this purpose the number of
elements along the edge of the plate needs to be divisible by two.
````julia
midpoint = selectnode(fens, box=[L/2 L/2 L/2 L/2 0 0], inflate=tolerance);
````
Check that we found that node.
````julia
@assert midpoint != []
````
Extract the displacement component in the vertical direction (Z).
````julia
midpointdof = u.dofnums[midpoint, 3]
#
````
## Plot the results
````julia
using Gnuplot
````
Plot the amplitude of the FRF.
````julia
umidAmpl = abs.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 0 " :-
@gp :- vec(frequencies) vec(umidAmpl) "lw 2 lc rgb 'red' with lines title 'Midpoint displacement' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement amplitude [mm]'"
````
Plot the FRF real and imaginary components.
````julia
umidReal = real.(U1[midpointdof, :])/phun("MM")
umidImag = imag.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 1 " :-
@gp :- vec(frequencies) vec(umidReal) "lw 2 lc rgb 'red' with lines title 'Real' " :-
@gp :- vec(frequencies) vec(umidImag) "lw 2 lc rgb 'blue' with lines title 'Imaginary' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement FRF [mm]'"
````
Plot the phase shift of the FRF.
````julia
umidPhase = atan.(umidImag, umidReal)/pi*180
@gp "set terminal windows 2 " :-
@gp :- vec(frequencies) vec(umidPhase) "lw 2 lc rgb 'red' with lines title 'Phase shift' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Phase shift [deg]'"
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# TEST 13H: square plate under harmonic loading
Source code: [`TEST13H_tut.jl`](TEST13H_tut.jl)
## Description
Harmonic forced vibration problem is solved for a homogeneous square plate,
simply-supported on the circumference. This is the TEST 13H from the Abaqus v
6.12 Benchmarks manual. The test is recommended by the National Agency for
Finite Element Methods and Standards (U.K.): Test 13 from NAFEMS “Selected
Benchmarks for Forced Vibration,” R0016, March 1993.
The plate is discretized with hexahedral solid elements. The simple support
condition is approximated by distributed rollers on the boundary. Because
only the out of plane displacements are prevented, the structure has three
rigid body modes in the plane of the plate.
Homogeneous square plate, simply-supported on the circumference from the test
13 from NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
The nonzero benchmark frequencies are (in hertz): 2.377, 5.961, 5.961, 9.483,
12.133, 12.133, 15.468, 15.468 [Hz].
The magnitude of the displacement for the fundamental frequency (2.377 Hz) is
45.42mm according to the reference solution.

## References
[1] NAFEMS “Selected Benchmarks for Forced Vibration,” R0016, March 1993.
## Goals
- Show how to generate a hexahedral mesh.
- Execute a frequency sweep over the forcing frequencies.
````julia
#
````
## Definitions
Bring in required support.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
using FinEtoolsDeforLinear
using LinearAlgebra
using Arpack
````
The input parameters come from [1].
````julia
E = 200*phun("GPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 8000*phun("kg*m^-3");# mass density
qmagn = 100.0*phun("Pa")
L = 10.0*phun("m"); # side of the square plate
t = 0.05*phun("m"); # thickness of the square plate
nL = 16; nt = 4;
tolerance = t/nt/100;
frequencies = vcat(linearspace(0.0,2.377,15), linearspace(2.377,15.0,40))
````
Compute the parameters of Rayleigh damping. For the two selected
frequencies we have the relationship between the damping ratio and
the Rayleigh parameters
$\xi_m=\frac{a_0}{2\omega_m}+\frac{a_1\omega_m}{2}$
where $m=1,2$. Solving for the Rayleigh parameters $a_0,a_1$ yields:
````julia
zeta1 = 0.02; zeta2 = 0.02;
o1 = 2*pi*2.377; o2 = 2*pi*15.468;
a0 = 2*(o1*o2)/(o2^2-o1^2)*(o2*zeta1-o1*zeta2);# a0
a1 = 2*(o1*o2)/(o2^2-o1^2)*(-1/o2*zeta1+1/o1*zeta2);# a1
#
````
## Discrete model
Generate the finite element domain as a block.
````julia
fens,fes = H8block(L, L, t, nL, nL, nt)
````
Create the geometry field.
````julia
geom = NodalField(fens.xyz)
````
Create the displacement field. Note that it holds complex numbers.
````julia
u = NodalField(zeros(ComplexF64, size(fens.xyz,1), 3)) # displacement field
````
In order to apply the essential boundary conditions we need to select the
nodes along the side faces of the plate and support them in the Z direction.
````julia
nl = selectnode(fens, box=[0.0 0.0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf 0.0 0.0 -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
nl = selectnode(fens, box=[-Inf Inf L L -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 3)
````
Those boundary conditions can now be applied to the displacement field,...
````julia
applyebc!(u)
````
... and the degrees of freedom can be numbered.
````julia
numberdofs!(u)
println("nfreedofs = $(nfreedofs(u))")
````
The model is three-dimensional.
````julia
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
````
Given how relatively thin the plate is, we choose an effective element: the
mean-strain hexahedral element, which is quite tolerant of high aspect
ratios.
````julia
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
````
These elements need to be supplied with the geometry before anything else can be
computed with them in a finite element machine. Hence we first
associate the geometry with the FEMM.
````julia
femm = associategeometry!(femm, geom)
````
Now we can calculate the stiffness matrix and the mass matrix. Note that the
mass matrix is evaluated with a higher-order Gauss rule than the stiffness matrix.
````julia
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
````
The damping matrix is a linear combination of the mass matrix and the
stiffness matrix (Rayleigh model).
````julia
C = a0*M + a1*K
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
C_ff = matrix_blocked(C, nfreedofs(u))[:ff]
````
Find the boundary finite elements on top of the plate. The uniform distributed
loading will be applied to these elements.
````julia
bdryfes = meshboundary(fes)
````
Those facing up (in the positive Z direction) will be chosen:
````julia
topbfl = selectelem(fens, bdryfes, facing=true, direction=[0.0 0.0 1.0])
````
A base finite element model machine will be created to evaluate the loading.
The force intensity is created as driven by a function, but the function
really just fills the buffer with the constant loading vector.
````julia
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
forceout .= [0.0, 0.0, qmagn]
return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
````
The loading vector is lumped from the distributed uniform loading by
integrating on the boundary. Hence, the dimension of the integration domain
is 2.
````julia
el1femm = FEMMBase(IntegDomain(subset(bdryfes,topbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f = vector_blocked(F, nfreedofs(u))[:f]
#
````
## Sweep through the frequencies
Sweep through the frequencies and calculate the complex displacement vector
for each of the frequencies from the complex balance equations of the
structure.
The entire solution will be stored in this array:
````julia
U1 = zeros(ComplexF64, nfreedofs(u), length(frequencies))
print("Sweeping through $(length(frequencies)) frequencies\n")
t0 = time()
for k in 1:length(frequencies)
f = frequencies[k];
omega = 2*pi*f;
U1[:, k] = (-omega^2*M_ff + 1im*omega*C_ff + K_ff)\F_f;
print(".")
end
print("\nTime = $(time()-t0)\n")
````
Find the midpoint of the plate bottom surface. For this purpose the number of
elements along the edge of the plate needs to be divisible by two.
````julia
midpoint = selectnode(fens, box=[L/2 L/2 L/2 L/2 0 0], inflate=tolerance);
````
Check that we found that node.
````julia
@assert midpoint != []
````
Extract the displacement component in the vertical direction (Z).
````julia
midpointdof = u.dofnums[midpoint, 3]
#
````
## Plot the results
````julia
using Gnuplot
````
Plot the amplitude of the FRF.
````julia
umidAmpl = abs.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 0 " :-
@gp :- vec(frequencies) vec(umidAmpl) "lw 2 lc rgb 'red' with lines title 'Midpoint displacement' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement amplitude [mm]'"
````
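The reference solution quotes a displacement magnitude of 45.42 mm at the
fundamental frequency of 2.377 Hz. A quick check of the computed peak (a sketch;
the value recovered depends on how finely the frequency list resolves the
resonance):
````julia
# Peak of the midpoint frequency response and the frequency at which it occurs
pk, kpk = findmax(vec(umidAmpl))
@show frequencies[kpk], pk   # compare with 2.377 Hz and 45.42 mm
````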
Plot the FRF real and imaginary components.
````julia
umidReal = real.(U1[midpointdof, :])/phun("MM")
umidImag = imag.(U1[midpointdof, :])/phun("MM")
@gp "set terminal windows 1 " :-
@gp :- vec(frequencies) vec(umidReal) "lw 2 lc rgb 'red' with lines title 'Real' " :-
@gp :- vec(frequencies) vec(umidImag) "lw 2 lc rgb 'blue' with lines title 'Imaginary' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Midpoint displacement FRF [mm]'"
````
Plot the phase shift of the FRF.
````julia
umidPhase = atan.(umidImag, umidReal)/pi*180
@gp "set terminal windows 2 " :-
@gp :- vec(frequencies) vec(umidPhase) "lw 2 lc rgb 'red' with lines title 'Phase shift' " :-
@gp :- "set xlabel 'Frequency [Hz]'" :-
@gp :- "set ylabel 'Phase shift [deg]'"
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# Beam under on/off loading: transient response
Source code: [`beam_load_on_off_tut.jl`](beam_load_on_off_tut.jl)
## Description
A cantilever beam is loaded by a trapezoidal-pulse traction load at its free
cross-section. The load is ramped up over 0.015 seconds, held constant, and then
ramped off between 0.385 and 0.4 seconds. The beam oscillates about its equilibrium configuration.
The beam is modeled as a solid. Trapezoidal rule is used to integrate the
equations of motion in time. Rayleigh mass- and stiffness-proportional
damping is incorporated. The dynamic stiffness is factorized for efficiency.

## Goals
- Show how to create the discrete model, with implicit dynamics and proportional damping.
- Apply distributed loading varying in time.
- Demonstrate trapezoidal-rule time stepping.
````julia
#
````
## Definitions
Basic imports.
````julia
using LinearAlgebra
using Arpack
````
This is the finite element toolkit itself.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
Input parameters
````julia
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.005;
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
qmagn = 0.1*phun("MPa");
tend = 0.5*phun("SEC");
#
````
## Create the discrete model
````julia
MR = DeforModelRed3D
fens,fes = H8block(L, W, H, 50, 2, 4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinearMSH8(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
````
Find the boundary finite elements at the tip cross-section of the beam. The
uniform distributed loading will be applied to these elements.
````julia
bdryfes = meshboundary(fes)
````
Those facing in the negative X direction (the free end of the beam) will be chosen:
````julia
tipbfl = selectelem(fens, bdryfes, facing=true, direction=[-1.0 0.0 0.0])
````
A base finite element model machine will be created to evaluate the loading.
The force intensity is created as driven by a function, but the function
really just fills the buffer with the constant loading vector.
````julia
function pfun(forceout::Vector{T}, XYZ, tangents, feid, qpid) where {T}
forceout .= [0.0, 0.0, qmagn]
return forceout
end
fi = ForceIntensity(Float64, 3, pfun);
````
The loading vector is lumped from the distributed uniform loading by
integrating on the boundary. Hence, the dimension of the integration domain
is 2.
````julia
el1femm = FEMMBase(IntegDomain(subset(bdryfes,tipbfl), GaussRule(2,2)))
F = distribloads(el1femm, geom, u, fi, 2);
F_f, F_d = vector_blocked(F, nfreedofs(u))[(:f, :d)]
````
The loading function is defined as a time-dependent multiplier of the
constant distribution of the loading on the structure.
````julia
function tmult(t)
if (t <= 0.015)
t/0.015
else
if (t >= 0.4)
0.0
else
if (t <= 0.385)
1.0
else
(t - 0.4)/(0.385 - 0.4)
end
end
end
end
#
````
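To verify the trapezoidal shape of the pulse, the multiplier can be sampled at a
few characteristic times (a quick check; the sampling times are chosen by
inspection of the function above):
````julia
# Ramp-up, plateau, ramp-down, and after removal of the load
@show tmult.([0.0, 0.0075, 0.015, 0.2, 0.385, 0.3925, 0.4, 0.45])
````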
## Time step determination
We figure out the fundamental mode frequency; the time step will then be taken
as a fraction of the corresponding period.
````julia
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:SM);
````
The fundamental angular frequency is then:
````julia
omega_f = real(sqrt(evals[1]));
````
We take the time step to be a fraction of the period of vibration in the
fundamental mode.
````julia
@show dt = 0.05 * 1/(omega_f/2/pi);
#
````
## Damping model
We take the damping to be representative of what's happening at the
fundamental vibration frequency.
For a given loss factor (loss tangent) at a certain frequency $\omega_f$, and
splitting the loss equally between the two mechanisms, the mass-proportional
damping coefficient may be estimated as (loss_tangent/2)*$\omega_f$, and the
stiffness-proportional damping coefficient as (loss_tangent/2)/$\omega_f$.
````julia
Rayleigh_mass = (loss_tangent/2)*omega_f;
Rayleigh_stiffness = (loss_tangent/2)/omega_f;
````
Now we construct the Rayleigh damping matrix as a linear combination of the
stiffness and mass matrices.
````julia
C_ff = Rayleigh_stiffness * K_ff + Rayleigh_mass * M_ff
````
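As a sanity check (a sketch using the coefficients computed above), the loss
factor reproduced by this damping model at the fundamental frequency should come
out close to `loss_tangent`:
````julia
# Effective loss factor at the fundamental frequency: eta = a0/omega + a1*omega
@show Rayleigh_mass/omega_f + Rayleigh_stiffness*omega_f
````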
The time stepping loop is protected by `let end` to avoid unpleasant surprises
with variables getting clobbered by globals.
````julia
ts, corneruzs = let dt = dt, F_f = F_f
````
Initial displacement, velocity, and acceleration.
````julia
U0 = gathersysvec(u)
v = deepcopy(u)
V0 = gathersysvec(v)
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
F0 = deepcopy(F_f)
F1 = fill(0.0, length(F0))
R = fill(0.0, length(F0))
````
Factorize the dynamic stiffness
````julia
DSF = cholesky((M_ff + (dt/2)*C_ff + ((dt/2)^2)*K_ff))
````
The times and displacements of the corner will be collected into two vectors
````julia
ts = Float64[]
corneruzs = Float64[]
````
Let us begin the time integration loop:
````julia
t = 0.0;
step = 0;
F0 .= tmult(t) .* F_f
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,100)==0) && println("Step $(step): $(t)")
````
Set the time-dependent load
````julia
F1 .= tmult(t) .* F_f
````
Compute the out of balance force.
````julia
R .= (M_ff*V0 - C_ff*(dt/2*V0) - K_ff*((dt/2)^2*V0 + dt*U0) + (dt/2)*(F0+F1));
````
Calculate the new velocities.
````julia
V1 = DSF\R;
````
Update the displacements.
````julia
U1 = U0 + (dt/2)*(V0+V1);
````
Switch the temporary vectors for the next step.
````julia
U0, U1 = U1, U0;
V0, V1 = V1, V0;
F0, F1 = F1, F0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
#
````
## Plot the results
````julia
using Gnuplot
@gp "set terminal windows 0 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
````
The end.
````julia
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# Tracking transient deformation of a cantilever beam: centered difference
Source code: [`bending_wave_Ray_expl_cd_tut.jl`](bending_wave_Ray_expl_cd_tut.jl)
## Description
A cantilever beam is given an initial velocity and then at time 0.0 it is
suddenly stopped by fixing one of its ends. This sends a wave down the beam.
The beam is modeled as a solid. The centered difference rule is used to integrate the
equations of motion in time. No damping is present.
## Goals
- Show how to create the discrete model for explicit dynamics.
- Demonstrate centered difference time stepping.
````julia
#
````
## Definitions
Basic imports.
````julia
using LinearAlgebra
using Arpack
````
This is the finite element toolkit itself.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
Input parameters
````julia
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.0001;
frequency = 1/0.0058;
Rayleigh_mass = 2*loss_tangent*(2*pi*frequency);
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.013*phun("SEC");
#
````
## Create the discrete model
````julia
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 50,1,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
````
Assemble the mass matrix as diagonal. The HRZ lumping technique is
applied through the assembler of the sparse matrix.
````julia
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
````
Form the mass-proportional damping matrix. (It is not actually used by the
undamped centered-difference loop below; it is kept here for reference.)
````julia
C_ff = Rayleigh_mass * M_ff
````
Figure out the highest frequency in the model, and use a time step just below
the critical (stable) time step of the centered difference rule, $2/\omega_{\mathrm{max}}$.
````julia
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 0.99 * 2/real(sqrt(evals[1]));
````
The time stepping loop is protected by `let end` to avoid unpleasant surprises
with variables getting clobbered by globals.
````julia
ts, corneruzs = let dt = dt
````
Initial displacement, velocity, and acceleration.
````julia
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 3] .= vmag
V0 = gathersysvec(v)
F1 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
A0 = fill(0.0, length(V0))
A1 = fill(0.0, length(V0))
````
The times and displacements of the corner will be collected into two vectors
````julia
ts = Float64[]
corneruzs = Float64[]
````
Let us begin the time integration loop:
````julia
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,1000)==0) && println("Step $(step): $(t)")
````
Zero out the load
````julia
fill!(F1, 0.0);
````
Initial acceleration
````julia
if step == 1
A0 = M_ff \ (F1)
end
````
Update displacement.
````julia
@. U1 = U0 + dt*V0 + (dt^2/2)*A0;
````
Compute updated acceleration.
````julia
A1 .= M_ff \ (-K_ff*U1 + F1)
````
Update the velocities.
````julia
@. V1 = V0 + (dt/2)*(A0 + A1)
````
Switch the temporary vectors for the next step.
````julia
U0, U1 = U1, U0;
V0, V1 = V1, V0;
A0, A1 = A1, A0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
#
````
## Plot the results
````julia
using Gnuplot
@gp "set terminal windows 4 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
````
The end.
````julia
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# Tracking transient deformation of a cantilever beam: lumped mass
Source code: [`bending_wave_Ray_lumped_tut.jl`](bending_wave_Ray_lumped_tut.jl)
## Description
A cantilever beam is given an initial velocity and then at time 0.0 it is
suddenly stopped by fixing one of its ends. This sends a wave down the beam.
The beam is modeled as a solid. Trapezoidal rule is used to integrate the
equations of motion in time. Rayleigh mass-proportional damping is
incorporated. The dynamic stiffness is factorized for efficiency.
## Goals
- Show how to create the discrete model for implicit dynamics.
- Demonstrate trapezoidal-rule time stepping.
````julia
#
````
## Definitions
Basic imports.
````julia
using LinearAlgebra
using Arpack
````
This is the finite element toolkit itself.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
Input parameters
````julia
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.0001;
frequency = 1/0.0058;
Rayleigh_mass = 2*loss_tangent*(2*pi*frequency);
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.013*phun("SEC");
#
````
## Create the discrete model
````julia
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 50,1,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
````
Assemble the mass matrix as diagonal. The HRZ lumping technique is
applied through the assembler of the sparse matrix.
````julia
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
````
Check visually that the mass matrix is in fact diagonal. We use
the `findnz` function to retrieve the nonzeros in the matrix.
Each such entry is then plotted as a point.
````julia
using Gnuplot
using SparseArrays
I, J, V = findnz(M_ff)
@gp "set terminal windows 1 " :-
@gp :- J I "with p" :-
@gp :- "set xlabel 'Column'" "set xrange [1:$(size(M_ff, 2))] " :-
@gp :- "set ylabel 'Row'" "set yrange [$(size(M_ff, 1)):1] "
````
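The diagonality can also be checked programmatically (a one-line sketch;
`LinearAlgebra` is already loaded above):
````julia
@show isdiag(M_ff)   # true when all off-diagonal entries vanish
````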
Find the relationship between the sum of all the elements of the
mass matrix and the total mass of the structure.
````julia
@show sum(sum(M_ff))
@show L*W*H*rho
````
Form the damping matrix.
````julia
C_ff = Rayleigh_mass * M_ff
````
Figure out the highest frequency in the model, and use a time step that is
considerably larger than the period of that highest frequency.
````julia
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 350 * 2/real(sqrt(evals[1]));
````
The time stepping loop is protected by `let end` to avoid unpleasant surprises
with variables getting clobbered by globals.
````julia
ts, corneruzs = let dt = dt
````
Initial displacement, velocity, and acceleration.
````julia
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 3] .= vmag
V0 = gathersysvec(v)
F0 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
F1 = fill(0.0, length(V0))
R = fill(0.0, length(V0))
````
Factorize the dynamic stiffness
````julia
DSF = cholesky((M_ff + (dt/2)*C_ff + ((dt/2)^2)*K_ff))
````
The times and displacements of the corner will be collected into two vectors
````julia
ts = Float64[]
corneruzs = Float64[]
````
Let us begin the time integration loop:
````julia
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,25)==0) && println("Step $(step): $(t)")
````
Zero out the load
````julia
fill!(F1, 0.0);
````
Compute the out of balance force.
````julia
R = (M_ff*V0 - C_ff*(dt/2*V0) - K_ff*((dt/2)^2*V0 + dt*U0) + (dt/2)*(F0+F1));
````
Calculate the new velocities.
````julia
V1 = DSF\R;
````
Update the displacements.
````julia
U1 = U0 + (dt/2)*(V0+V1);
````
Switch the temporary vectors for the next step.
````julia
U0, U1 = U1, U0;
V0, V1 = V1, V0;
F0, F1 = F1, F0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
#
````
## Plot the results
````julia
using Gnuplot
@gp "set terminal windows 0 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
````
The end.
````julia
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# Tracking transient deformation of a cantilever beam
Source code: [`bending_wave_Ray_tut.jl`](bending_wave_Ray_tut.jl)
## Description
A cantilever beam is given an initial velocity and then at time 0.0 it is
suddenly stopped by fixing one of its ends. This sends a wave down the beam.
The beam is modeled as a solid. Consistent mass matrix is used.
Trapezoidal rule is used to integrate the
equations of motion in time. Rayleigh mass-proportional damping is
incorporated. The dynamic stiffness is factorized for efficiency.
Deflection at the free end will look like this:

## Goals
- Show how to create the discrete model.
- Demonstrate trapezoidal-rule time stepping.
````julia
#
````
## Definitions
Basic imports.
````julia
using LinearAlgebra
using Arpack
````
This is the finite element toolkit itself.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
Input parameters
````julia
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
loss_tangent = 0.0001;
frequency = 1/0.0058;
Rayleigh_mass = 2*loss_tangent*(2*pi*frequency);
L = 200*phun("mm");
W = 4*phun("mm");
H = 8*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.013*phun("SEC");
#
````
## Create the discrete model
````julia
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 50,1,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[L L -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
setebc!(u, nl, true, 2)
setebc!(u, nl, true, 3)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[0 0 0])
cornerzdof = u.dofnums[corner[1], 3]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u)
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
````
Find the relationship between the sum of all the elements of the
mass matrix and the total mass of the structure.
````julia
@show sum(sum(M_ff))
@show L*W*H*rho
````
Form the damping matrix.
````julia
C_ff = Rayleigh_mass * M_ff
````
Figure out the highest frequency in the model, and use a time step that is
considerably larger than the period of that highest frequency.
````julia
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 350 * 2/real(sqrt(evals[1]));
````
The time stepping loop is protected by `let end` to avoid unpleasant surprises
with variables getting clobbered by globals.
````julia
ts, corneruzs = let dt = dt
````
Initial displacement, velocity, and acceleration.
````julia
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 3] .= vmag
V0 = gathersysvec(v)
F0 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
F1 = fill(0.0, length(V0))
R = fill(0.0, length(V0))
````
Factorize the dynamic stiffness
````julia
DSF = cholesky((M_ff + (dt/2)*C_ff + ((dt/2)^2)*K_ff))
````
The times and displacements of the corner will be collected into two vectors
````julia
ts = Float64[]
corneruzs = Float64[]
````
Let us begin the time integration loop:
````julia
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(corneruzs, U0[cornerzdof])
t = t+dt;
step = step + 1;
(mod(step,25)==0) && println("Step $(step): $(t)")
````
Zero out the load
````julia
fill!(F1, 0.0);
````
Compute the out of balance force.
````julia
R = (M_ff*V0 - C_ff*(dt/2*V0) - K_ff*((dt/2)^2*V0 + dt*U0) + (dt/2)*(F0+F1));
````
Calculate the new velocities.
````julia
V1 = DSF\R;
````
Update the displacements.
````julia
U1 = U0 + (dt/2)*(V0+V1);
````
Switch the temporary vectors for the next step.
````julia
U0, U1 = U1, U0;
V0, V1 = V1, V0;
F0, F1 = F1, F0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, corneruzs # return the collected results
end
#
````
## Plot the results
````julia
using Gnuplot
@gp "set terminal windows 0 " :-
@gp :- ts corneruzs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Displacement of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Displacement [mm]'"
````
The end.
````julia
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# Tutorials for `FinEtoolsDeforLinear`
## Statics
- [Cook membrane, plane stress](Cook-plane-stress_tut.md) Well-known benchmark.
- [Composite benchmark R0031-3](R0031-3-Composite-benchmark_tut.md) NAFEMS sandwich plate benchmark.
- [Twisted beam, export to Abaqus](twisted_beam-export-to-abaqus_tut.md) Another well-known benchmark.
## Dynamics
- [Nearly incompressible cube vibration](unit_cube_modes_tut.md) Vibration of squishy cube.
- [Nearly incompressible cube vibration, alternative models](unit_cube_modes_alt_tut.md)
- [13H benchmark, forced vibration](TEST13H_tut.md) NAFEMS 13H plate forced-vibration benchmark.
- [13H benchmark, forced vibration, modal model](TEST13H_mod_tut.md) NAFEMS 13H plate forced-vibration benchmark solved with a modal model.
- [13H benchmark, forced vibration, parallel execution](TEST13H_par_tut.md) NAFEMS 13H plate forced-vibration benchmark with the frequency sweep executed in parallel.
- [FV32 benchmark, free vibration of trapezoidal membrane](FV32_tut.md) NAFEMS plane-stress vibration benchmark.
- [Transient deformation of a cantilever beam](bending_wave_Ray_tut.md)
- [Transient deformation of a cantilever beam, lumped mass matrix](bending_wave_Ray_lumped_tut.md)
- [Transient deformation of a cantilever beam, centered difference](bending_wave_Ray_expl_cd_tut.md)
- [Beam under on/off loading: transient response](beam_load_on_off_tut.md)
- [Suddenly-stopped bar: CD explicit](sudden_stop_expl_cd_tut.md) Transient simulation with center difference integration.
- [Suddenly-stopped bar: TW explicit](sudden_stop_expl_tw_tut.md) Transient simulation with Tchamwa-Wielgosz integration.
# Suddenly-stopped bar: Centered difference explicit
Source code: [`sudden_stop_expl_cd_tut.jl`](sudden_stop_expl_cd_tut.jl)
## Description
A bar is given an initial velocity and then at time 0.0 it is
suddenly stopped by fixing one of its ends. This sends a wave down the bar.
The output of the simulation is the velocity at the free end, which should
reproduce the rectangular pulses as the stress wave bounces back and forth along the bar.
The bar is modeled as a solid. The classical centered difference
rule is used to integrate the
equations of motion in time. No damping is present.
## Goals
- Show how to create the discrete model for explicit dynamics.
- Demonstrate centered difference explicit time stepping.
````julia
#
````
## Definitions
Basic imports.
````julia
using LinearAlgebra
using Arpack
````
This is the finite element toolkit itself.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
Input parameters
````julia
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
L = 20*phun("mm");
W = 1*phun("mm");
H = 1*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.00005*phun("SEC");
#
````
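For reference, the axial wave speed in the bar and the time for the wave to
traverse its length can be estimated from the parameters above (a
back-of-the-envelope check; the variable names are introduced here only for this
purpose):
````julia
cbar = sqrt(E/rho)        # axial (bar) wave speed
transittime = L/cbar      # time for the wave to travel the length of the bar
@show cbar, transittime
@show tend/transittime    # the simulation covers this many traversals
````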
## Create the discrete model
````julia
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 80,4,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[0 0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[L 0 0])
cornerxdof = u.dofnums[corner[1], 1]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
````
Assemble the mass matrix as diagonal. The HRZ lumping technique is
applied through the assembler of the sparse matrix.
````julia
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
````
Figure out the highest frequency in the model, and use a time step just below
the critical (stable) time step of the centered difference rule, $2/\omega_{\mathrm{max}}$.
````julia
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 0.9 * 2/real(sqrt(evals[1]));
````
The time stepping loop is protected by `let end` to avoid unpleasant surprises
with variables getting clobbered by globals.
````julia
ts, cornervxs = let dt = dt
````
Initial displacement, velocity, and acceleration.
````julia
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 1] .= -vmag
V0 = gathersysvec(v)
F1 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
A0 = fill(0.0, length(V0))
A1 = fill(0.0, length(V0))
phi = 1.005
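# Note: this parameter belongs to the Tchamwa-Wielgosz variant of the explicit
# integrator; it is not used by the centered-difference update below.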
````
The times and velocities of the corner will be collected into two vectors
````julia
ts = Float64[]
cornervxs = Float64[]
````
Let us begin the time integration loop:
````julia
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(cornervxs, V0[cornerxdof])
t = t+dt;
step = step + 1;
(mod(step,1000)==0) && println("Step $(step): $(t)")
````
Zero out the load
````julia
fill!(F1, 0.0);
````
Initial acceleration
````julia
if step == 1
A0 = M_ff \ (F1)
end
````
Update displacement.
````julia
@. U1 = U0 + dt*V0 + (dt^2/2)*A0;
````
Compute updated acceleration.
````julia
A1 .= M_ff \ (-K_ff*U1 + F1)
````
Update the velocities.
````julia
@. V1 = V0 + (dt/2)*(A0 + A1)
````
Switch the temporary vectors for the next step.
````julia
U0, U1 = U1, U0;
V0, V1 = V1, V0;
A0, A1 = A1, A0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, cornervxs # return the collected results
end
#
````
## Plot the results
````julia
using Gnuplot
@gp "set terminal windows 7 " :-
@gp :- ts cornervxs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Velocity of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Velocity [mm/s]'"
````
The end.
````julia
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# Suddenly-stopped bar: TW explicit
Source code: [`sudden_stop_expl_tw_tut.jl`](sudden_stop_expl_tw_tut.jl)
## Description
A bar is given an initial velocity and then at time 0.0 it is
suddenly stopped by fixing one of its ends. This sends a wave down the bar.
The output of the simulation is the velocity at the free end, which should
reproduce the rectangular pulses as the stress wave bounces back and forth along the bar.
The bar is modeled as a solid. The Tchamwa-Wielgosz explicit rule
is used to integrate the equations of motion in time. No damping is present.
## Goals
- Show how to create the discrete model for explicit dynamics.
- Demonstrate Tchamwa-Wielgosz explicit time stepping.
````julia
#
````
## Definitions
````julia
tst = time()
````
Basic imports.
````julia
using LinearAlgebra
using Arpack
````
This is the finite element toolkit itself.
````julia
using FinEtools
using FinEtools.AlgoBaseModule: matrix_blocked, vector_blocked
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
using FinEtoolsDeforLinear.AlgoDeforLinearModule
````
Input parameters
````julia
E = 205000*phun("MPa");# Young's modulus
nu = 0.3;# Poisson ratio
rho = 7850*phun("KG*M^-3");# mass density
L = 20*phun("mm");
W = 1*phun("mm");
H = 1*phun("mm");
tolerance = W/500;
vmag = 0.1*phun("m")/phun("SEC");
tend = 0.00005*phun("SEC");
#
````
## Create the discrete model
````julia
MR = DeforModelRed3D
fens,fes = H8block(L,W,H, 80,4,4)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
nl = selectnode(fens, box=[0 0 -Inf Inf -Inf Inf], inflate=tolerance)
setebc!(u, nl, true, 1)
applyebc!(u)
numberdofs!(u)
corner = selectnode(fens, nearestto=[L 0 0])
cornerxdof = u.dofnums[corner[1], 1]
material = MatDeforElastIso(MR, rho, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material)
femm = associategeometry!(femm, geom)
@time K = stiffness(femm, geom, u)
````
Assemble the mass matrix as diagonal. The HRZ lumping technique is
applied through the assembler of the sparse matrix.
````julia
hrzass = SysmatAssemblerSparseHRZLumpingSymm(0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, hrzass, geom, u)
````
Extract the free-free block of the matrices.
````julia
M_ff = matrix_blocked(M, nfreedofs(u))[:ff]
K_ff = matrix_blocked(K, nfreedofs(u))[:ff]
````
Figure out the highest frequency in the model, and use a time step somewhat
below $2/\omega_{\mathrm{max}}$, where $\omega_{\mathrm{max}}$ is the highest angular frequency.
````julia
evals, evecs = eigs(K_ff, M_ff; nev=1, which=:LM);
@show dt = 0.9 * 2/real(sqrt(evals[1]));
````
The time stepping loop is protected by `let end` to avoid unpleasant surprises
with variables getting clobbered by globals.
````julia
ts, cornervxs = let dt = dt
````
Initial displacement, velocity, and acceleration.
````julia
U0 = gathersysvec(u)
v = deepcopy(u)
v.values[:, 1] .= -vmag
V0 = gathersysvec(v)
F1 = fill(0.0, length(V0))
U1 = fill(0.0, length(V0))
V1 = fill(0.0, length(V0))
A0 = fill(0.0, length(V0))
A1 = fill(0.0, length(V0))
phi = 1.05
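# Tchamwa-Wielgosz parameter: values slightly above 1.0 (as here) introduce
# numerical damping of the spurious highest-frequency oscillations.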
````
The times and velocities of the corner will be collected into two vectors
````julia
ts = Float64[]
cornervxs = Float64[]
````
Let us begin the time integration loop:
````julia
t = 0.0;
step = 0;
while t < tend
push!(ts, t)
push!(cornervxs, V0[cornerxdof])
t = t+dt;
step = step + 1;
(mod(step,1000)==0) && println("Step $(step): $(t)")
````
Zero out the load
````julia
fill!(F1, 0.0);
````
Initial acceleration
````julia
if step == 1
A0 = M_ff \ (F1)
end
````
Update displacement.
````julia
@. U1 = U0 + dt*V0 + (phi*dt^2)*A0;
````
Update the velocities.
````julia
@. V1 = V0 + dt*A0
````
Compute updated acceleration.
````julia
A1 .= M_ff \ (-K_ff*U1 + F1)
````
Switch the temporary vectors for the next step.
````julia
U0, U1 = U1, U0;
V0, V1 = V1, V0;
A0, A1 = A1, A0;
if (t == tend) # Are we done yet?
break;
end
if (t+dt > tend) # Adjust the last time step so that we exactly reach tend
dt = tend-t;
end
end
ts, cornervxs # return the collected results
end
#
````
## Plot the results
````julia
using Gnuplot
@gp "set terminal windows 5 " :-
@gp :- ts cornervxs./phun("mm") "lw 2 lc rgb 'red' with lines title 'Velocity of the corner' "
@gp :- "set xlabel 'Time [s]'"
@gp :- "set ylabel 'Velocity [mm/s]'"
@show time()-tst
````
The end.
````julia
true
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# Twisted beam: Export solid model to Abaqus
Source code: [`twisted_beam-export-to-abaqus_tut.jl`](twisted_beam-export-to-abaqus_tut.jl)
## Description
In this example we show how to export a model to the finite element software Abaqus.
The model is solved also in the example `twisted_beam_algo.jl`. Here we export the model for execution in Abaqus.
The task begins with defining the input parameters, creating the mesh, identifying the nodes to which essential boundary conditions are to be applied, and extracting from the boundary the surface finite elements to which the traction loading at the end of the beam is to be applied.
This is the finite element toolkit itself.
````julia
using FinEtools
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
````
We will also use specifically some functions from these modules.
````julia
using FinEtoolsDeforLinear.AlgoDeforLinearModule
using FinEtools.MeshExportModule
````
Define some parameters, in consistent units.
````julia
E = 0.29e8;
nu = 0.22;
W = 1.1;
L = 12.;
t = 0.32;
nl = 2; nt = 1; nw = 1; ref = 2;
p = 1/W/t;
````
Loading in the Z direction. Reference (publication by Harder): 5.424e-3.
````julia
loadv = [0;0;p]; dir = 3; uex = 0.005424534868469;
````
Loading in the Y direction. Reference (Harder): 1.754e-3. Uncomment the line below (and comment out the Z-direction line above) to obtain the solution for this loading direction.
````julia
#loadv = [0;p;0]; dir = 2; uex = 0.001753248285256;
tolerance = t/1000;
fens,fes = H8block(L,W,t, nl*ref,nw*ref,nt*ref)
````
Reshape the rectangular block into a twisted beam shape.
````julia
for i = 1:count(fens)
let
a = fens.xyz[i,1]/L*(pi/2); y = fens.xyz[i,2]-(W/2); z = fens.xyz[i,3]-(t/2);
fens.xyz[i,:] = [fens.xyz[i,1],y*cos(a)-z*sin(a),y*sin(a)+z*cos(a)];
end
end
````
Clamped face of the beam: select all the nodes in this cross-section.
````julia
l1 = selectnode(fens; box = [0 0 -100*W 100*W -100*W 100*W], inflate = tolerance)
````
Traction on the opposite face
````julia
boundaryfes = meshboundary(fes);
Toplist = selectelem(fens,boundaryfes, box = [L L -100*W 100*W -100*W 100*W], inflate = tolerance);
````
The tutorial proper begins here. We create the Abaqus exporter and start writing the .inp file.
````julia
AE = AbaqusExporter("twisted_beam");
HEADING(AE, "Twisted beam example");
````
The part definition is trivial: all will be defined rather for the instance of the part.
````julia
PART(AE, "part1");
END_PART(AE);
````
The assembly will consist of a single instance (of the empty part defined above). The node set will be defined for the instance itself.
````julia
ASSEMBLY(AE, "ASSEM1");
INSTANCE(AE, "INSTNC1", "PART1");
NODE(AE, fens.xyz);
````
We export the finite elements themselves. Note that the elements need to have distinct numbers. We start numbering the hexahedra at 1. The definition of the element creates simultaneously an element set which is used below in the section assignment (and the definition of the load).
````julia
ELEMENT(AE, "c3d8rh", "AllElements", 1, connasarray(fes))
````
The traction is applied to surface elements. Because the elements in the Abaqus model need to have unique numbers, we need to start from an integer which is the number of the solid elements plus one.
````julia
ELEMENT(AE, "SFM3D4", "TractionElements", 1+count(fes), connasarray(subset(boundaryfes,Toplist)))
````
The nodes in the clamped cross-section are going to be grouped in the node set `l1`.
````julia
NSET_NSET(AE, "l1", l1)
````
We define a coordinate system (orientation of the material coordinate system), in this example it is the global Cartesian coordinate system. The sections are defined for the solid elements of the interior and the surface elements to which the traction is applied, and the assignment to the elements is by element set (`AllElements` and `TractionElements`). Note that for the solid section we also define reference to hourglass control named `Hourglassctl`.
````julia
ORIENTATION(AE, "GlobalOrientation", vec([1. 0 0]), vec([0 1. 0]));
SOLID_SECTION(AE, "elasticity", "GlobalOrientation", "AllElements", "Hourglassctl");
SURFACE_SECTION(AE, "TractionElements")
````
This concludes the definition of the instance and of the assembly.
````julia
END_INSTANCE(AE);
END_ASSEMBLY(AE);
````
This is the definition of the isotropic elastic material.
````julia
MATERIAL(AE, "elasticity")
ELASTIC(AE, E, nu)
````
The element properties for the interior hexahedra are controlled by the section-control. In this case we are selecting enhanced hourglass stabilization (much preferable to the default stiffness stabilization).
````julia
SECTION_CONTROLS(AE, "Hourglassctl", "HOURGLASS=ENHANCED")
````
The static perturbation analysis step is defined next.
````julia
STEP_PERTURBATION_STATIC(AE)
````
The boundary conditions are applied directly to the node set `l1`. Since the node set is defined for the instance, we need to refer to it by the qualified name `ASSEM1.INSTNC1.l1`.
````julia
BOUNDARY(AE, "ASSEM1.INSTNC1.l1", 1)
BOUNDARY(AE, "ASSEM1.INSTNC1.l1", 2)
BOUNDARY(AE, "ASSEM1.INSTNC1.l1", 3)
````
The traction is applied to the surface quadrilateral elements exported above.
````julia
DLOAD(AE, "ASSEM1.INSTNC1.TractionElements", vec(loadv))
````
Now we have defined the analysis step and the definition of the model can be concluded.
````julia
END_STEP(AE)
close(AE)
````
As a quick check, here are the contents of the exported model file:
````julia
@show readlines("twisted_beam.inp")
````
What remains is to load the model into Abaqus and execute it as a job. Alternatively, Abaqus can be called on the input file to carry out the analysis from the command line as
```
abaqus job=twisted_beam.inp
```
The output database `twisted_beam.odb` can then be loaded for postprocessing, for instance from the command line as
```
abaqus viewer database=twisted_beam.odb
```
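For a rough cross-check one can also solve the same problem directly with FinEtools and compare the tip deflection with the reference value `uex`. The following is only a sketch, not part of the Abaqus export: it assumes the classic FinEtools workflow in which `numberdofs!` numbers only the free degrees of freedom, so that the system can be solved simply as `K \ F` (newer versions of the toolkit work with matrices partitioned into free and prescribed blocks instead). The functions used here (`setebc!`, `applyebc!`, `ForceIntensity`, `distribloads`, `scattersysvec!`) are the standard toolkit names, but check the exact signatures against the FinEtools version you have installed. With this deliberately coarse mesh of fully integrated 8-node hexahedra the computed deflection will fall short of the reference value; mesh refinement brings it closer.
````julia
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz, 1), 3)) # displacement field
for comp in 1:3 # clamp all three displacement components at the supported face
    setebc!(u, l1, true, comp, 0.0)
end
applyebc!(u); numberdofs!(u)
MR = DeforModelRed3D
material = MatDeforElastIso(MR, 0.0, E, nu, 0.0)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3, 2)), material)
K = stiffness(femm, geom, u)
fi = ForceIntensity(loadv)
trfemm = FEMMBase(IntegDomain(subset(boundaryfes, Toplist), GaussRule(2, 2)))
F = distribloads(trfemm, geom, u, fi, 2)
scattersysvec!(u, K \ F)
tipn = selectnode(fens; box = [L L 0 0 0 0], inflate = tolerance)
@show u.values[tipn, dir], uex
````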
````julia
nothing
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | docs | 3863 | # Vibration of a cube of nearly incompressible material: alternative models
Source code: [`unit_cube_modes_alt_tut.jl`](unit_cube_modes_alt_tut.jl)
## Description
Compute the free-vibration spectrum of a unit cube of nearly
incompressible isotropic material, E = 1, ν = 0.499, and ρ = 1 (refer to [1]).
Here we show how alternative finite element models compare.
The solution with the serendipity quadratic hexahedron is supplemented with
solutions obtained with advanced finite elements: nodally integrated,
energy-stabilized hexahedra and tetrahedra, and mean-strain hexahedra and
tetrahedra.
## References
[1] Puso MA, Solberg J (2006) A stabilized nodally integrated tetrahedral. International Journal for Numerical Methods in Engineering 67: 841-867.
[2] P. Krysl, Mean-strain 8-node hexahedron with optimized energy-sampling
stabilization, Finite Elements in Analysis and Design 108 (2016) 41–53.

## Goals
- Show how to set up a simulation loop that will run all the models and collect data.
- Show how to present the computed spectrum curves.
````julia
#
````
## Definitions
This is the finite element toolkit itself.
````julia
using FinEtools
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
````
Convenience import.
````julia
using FinEtools.MeshExportModule
````
The eigenvalue problem is solved with the Lanczos algorithm from the Arpack package; the SymRCM package provides the reverse Cuthill-McKee node renumbering used below.
````julia
using Arpack
using SymRCM
````
The material properties and dimensions are defined with physical units.
````julia
E = 1*phun("PA");
nu = 0.499;
rho = 1*phun("KG/M^3");
a = 1*phun("M"); # length of the side of the cube
N = 8
neigvs = 20 # how many eigenvalues
OmegaShift = (0.01*2*pi)^2; # The frequency with which to shift
````
The model is fully three-dimensional, and hence the material model and the
FEMM created below need to refer to an appropriate model-reduction scheme.
````julia
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0);
````
Each model is described by a tuple: a label, the mesh-generation function, the
quadrature rule, the FEMM (finite element machine) type, and a refinement
multiplier applied to the number of element edges per side.
````julia
models = [
("H20", H20block, GaussRule(3,2), FEMMDeforLinear, 1),
("ESNICEH8", H8block, NodalTensorProductRule(3), FEMMDeforLinearESNICEH8, 2),
("ESNICET4", T4block, NodalSimplexRule(3), FEMMDeforLinearESNICET4, 2),
("MSH8", H8block, NodalTensorProductRule(3), FEMMDeforLinearMSH8, 2),
("MST10", T10block, TetRule(4), FEMMDeforLinearMST10, 1),
]
````
Run the simulation loop over all the models.
````julia
sigdig(n) = round(n * 10000) / 10000
results = let
results = []
for m in models
fens, fes = m[2](a, a, a, m[5]*N, m[5]*N, m[5]*N);
@show count(fens)
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3))
numbering = let
C = connectionmatrix(FEMMBase(IntegDomain(fes, m[3])), count(fens))
numbering = symrcm(C)
end
numberdofs!(u, numbering);
println("nfreedofs = $(nfreedofs(u))")
femm = m[4](MR, IntegDomain(fes, m[3]), material);
femm = associategeometry!(femm, geom)
K = stiffness(femm, geom, u)
M = mass(femm, geom, u);
evals, evecs, nconv = eigs(K+OmegaShift*M, M; nev=neigvs, which=:SM)
@show nconv == neigvs
evals = evals .- OmegaShift;
fs = real(sqrt.(complex(evals)))/(2*pi)
println("$(m[1]) eigenvalues: $(sigdig.(fs)) [Hz]")
push!(results, (m, fs))
end
results # return it
end
#
````
## Present the results graphically
````julia
using Gnuplot
@gp "set terminal windows 0 " :-
for r in results
@gp :- collect(1:length(r[2])) vec(r[2]) " lw 2 with lp title '$(r[1][1])' " :-
end
@gp :- "set xlabel 'Mode number [ND]'" :-
@gp :- "set ylabel 'Frequency [Hz]'"
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 3.0.4 | c0a87e41e018a0cb93b6670b7f0e40e6615e5cf8 | docs | 5992 | # Vibration of a cube of nearly incompressible material
Source code: [`unit_cube_modes_tut.jl`](unit_cube_modes_tut.jl)
## Description
Compute the free-vibration spectrum of a unit cube of nearly
incompressible isotropic material, E = 1, ν = 0.499, and ρ = 1 (refer to [1]).
The solution with the `FinEtools` package is compared with a commercial
software solution, and hence we also export the model to Abaqus.
## References
[1] Puso MA, Solberg J (2006) A stabilized nodally integrated tetrahedral.
International Journal for Numerical Methods in Engineering 67: 841-867.
[2] P. Krysl, Mean-strain 8-node hexahedron with optimized energy-sampling
stabilization, Finite Elements in Analysis and Design 108 (2016) 41–53.

## Goals
- Show how to generate simple mesh.
- Show how to set up the discrete model for a free vibration problem.
- Show how to export the model to Abaqus.
````julia
#
````
## Definitions
This is the finite element toolkit itself.
````julia
using FinEtools
````
The linear stress analysis application is implemented in this package.
````julia
using FinEtoolsDeforLinear
````
Convenience import.
````julia
using FinEtools.MeshExportModule
````
The eigenvalue problem is solved with the Lanczos algorithm from this package.
````julia
using Arpack
````
The material properties and dimensions are defined with physical units.
````julia
E = 1*phun("PA");
nu = 0.499;
rho = 1*phun("KG/M^3");
a = 1*phun("M"); # length of the side of the cube
````
We generate a mesh of 5 x 5 x 5 serendipity 20-node hexahedral elements in a
regular grid.
````julia
fens, fes = H20block(a, a, a, 5, 5, 5);
````
The problem is solved in three dimensions and hence we create the displacement
field as three-dimensional with three displacement components per node. The
degrees of freedom are then numbered (note that no essential boundary
conditions are applied, since the cube is free-floating).
````julia
geom = NodalField(fens.xyz)
u = NodalField(zeros(size(fens.xyz,1),3)) # displacement field
numberdofs!(u);
````
The model is fully three-dimensional, and hence the material model and the
FEMM created below need to refer to an appropriate model-reduction scheme.
````julia
MR = DeforModelRed3D
material = MatDeforElastIso(MR, rho, E, nu, 0.0);
````
Note that we compute the stiffness and the mass matrix using different FEMMs.
The difference is only the quadrature rule chosen: in order to make the mass
matrix non-singular, an accurate Gauss rule needs to be used, whereas for
the stiffness matrix we want to avoid the excessive stiffness and therefore
the reduced Gauss rule is used.
````julia
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,2)), material);
K = stiffness(femm, geom, u)
femm = FEMMDeforLinear(MR, IntegDomain(fes, GaussRule(3,3)), material)
M = mass(femm, geom, u);
````
The free vibration problem can now be solved. In order for the eigenvalue
solver to work well, we apply mass-shifting (otherwise the first matrix
given to the solver – stiffness – would be singular). We specify the number
of eigenvalues to solve for, and we guess the frequency with which to shift
as 0.01 Hz.
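In other words, instead of the generalized eigenvalue problem $K v = \omega^2 M v$ we hand the solver the shifted problem $(K + \omega_0^2 M) v = (\omega^2 + \omega_0^2) M v$, with $\omega_0 = 2\pi \times 0.01$ rad/s, whose left-hand-side matrix is no longer singular; the shift $\omega_0^2$ is subtracted from the computed eigenvalues afterwards.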
````julia
neigvs = 20 # how many eigenvalues
OmegaShift = (0.01*2*pi)^2; # The frequency with which to shift
````
The `eigs` routine can now be invoked to solve for a given number of
frequencies from the smallest-magnitude end of the spectrum. Note that the
mass shifting needs to be undone when the solution is obtained.
````julia
evals, evecs, nconv = eigs(K+OmegaShift*M, M; nev=neigvs, which=:SM)
@show nconv == neigvs
evals = evals .- OmegaShift;
fs = real(sqrt.(complex(evals)))/(2*pi)
sigdig(n) = round(n * 10000) / 10000
println("Eigenvalues: $(sigdig.(fs)) [Hz]")
````
The first nonzero frequency, frequency 7, should be around 0.263 Hz.
The computed mode can be visualized in Paraview. Use the "Animation view" to
produce moving pictures for the mode.
````julia
mode = 7
scattersysvec!(u, evecs[:,mode])
File = "unit_cube_modes.vtk"
vtkexportmesh(File, fens, fes; vectors=[("mode$mode", u.values)])
@async run(`"paraview.exe" $File`);
````
Finally we export the model to Abaqus. Note that we specify the mass
density property (necessary for dynamics).
````julia
AE = AbaqusExporter("unit_cube_modes_h20");
HEADING(AE, "Vibration modes of unit cube of almost incompressible material.");
COMMENT(AE, "The first six frequencies are rigid body modes.");
COMMENT(AE, "The first nonzero frequency (7) should be around 0.26 Hz");
PART(AE, "part1");
END_PART(AE);
ASSEMBLY(AE, "ASSEM1");
INSTANCE(AE, "INSTNC1", "PART1");
NODE(AE, fens.xyz);
COMMENT(AE, "The hybrid form of the serendipity hexahedron is chosen because");
COMMENT(AE, "the material is nearly incompressible.");
ELEMENT(AE, "C3D20RH", "AllElements", 1, connasarray(fes))
ORIENTATION(AE, "GlobalOrientation", vec([1. 0 0]), vec([0 1. 0]));
SOLID_SECTION(AE, "elasticity", "GlobalOrientation", "AllElements");
END_INSTANCE(AE);
END_ASSEMBLY(AE);
MATERIAL(AE, "elasticity")
ELASTIC(AE, E, nu)
DENSITY(AE, rho)
STEP_FREQUENCY(AE, neigvs)
END_STEP(AE)
close(AE)
````
What remains is to load the model into Abaqus and execute it as a job.
Alternatively, Abaqus can be called on the input file to carry out the
analysis from the command line as
```
abaqus job=unit_cube_modes_h20.inp
```
The output database `unit_cube_modes_h20.odb` can then be loaded for
postprocessing, for instance from the command line as
```
abaqus viewer database=unit_cube_modes_h20.odb
```
Don't forget to compare the computed frequencies and the mode shapes. For
instance, the first six frequencies should be nearly 0, and the seventh
frequency should be approximately 0.262 Hz. There may be very minor
differences due to the fact that the FinEtools formulation is purely
displacement-based, whereas the Abaqus model is hybrid (displacement plus
pressure).
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| FinEtoolsDeforLinear | https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 4974 | using SLEEF
using BenchmarkTools
using JLD, DataStructures
using Printf
const RETUNE = false
const VERBOSE = true
const DETAILS = false
const test_types = (Float64, Float32) # Which types do you want to bench?
const bench = ("Base", "SLEEF")
const suite = BenchmarkGroup()
for n in bench
suite[n] = BenchmarkGroup([n])
end
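# `bench_reduce` applies `f` to every element of `X` and ORs together the bit patterns of
# the results: the reduction forces all values to actually be computed while returning a
# single cheap scalar, which keeps the benchmark from being optimized away.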
bench_reduce(f::Function, X) = mapreduce(x -> reinterpret(Unsigned,x), |, f(x) for x in X)
using Base.Math: IEEEFloat
MRANGE(::Type{Float64}) = 10000000
MRANGE(::Type{Float32}) = 10000
IntF(::Type{Float64}) = Int64
IntF(::Type{Float32}) = Int32
x_trig(::Type{T}) where {T<:IEEEFloat} = begin
x_trig = T[]
for i = 1:10000
s = reinterpret(T, reinterpret(IntF(T), T(pi)/4 * i) - IntF(T)(20))
e = reinterpret(T, reinterpret(IntF(T), T(pi)/4 * i) + IntF(T)(20))
d = s
while d <= e
append!(x_trig, d)
d = reinterpret(T, reinterpret(IntF(T), d) + IntF(T)(1))
end
end
x_trig = append!(x_trig, -10:0.0002:10)
x_trig = append!(x_trig, -MRANGE(T):200.1:MRANGE(T))
end
x_exp(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10:0.0002:10, -1000:0.1:1000))
x_exp2(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10:0.0002:10, -120:0.023:1000, -1000:0.02:2000))
x_exp10(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10:0.0002:10, -35:0.023:1000, -300:0.01:300))
x_expm1(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10:0.0002:10, -1000:0.021:1000, -1000:0.023:1000, 10.0.^-(0:0.02:300), -10.0.^-(0:0.02:300), 10.0.^(0:0.021:300), -10.0.^-(0:0.021:300)))
x_log(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(0.0001:0.0001:10, 0.001:0.1:10000, 1.1.^(-1000:1000), 2.1.^(-1000:1000)))
x_log10(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(0.0001:0.0001:10, 0.0001:0.1:10000))
x_log1p(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(0.0001:0.0001:10, 0.0001:0.1:10000, 10.0.^-(0:0.02:300), -10.0.^-(0:0.02:300)))
x_atrig(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-1:0.00002:1))
x_atan(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10:0.0002:10, -10000:0.2:10000, -10000:0.201:10000))
x_cbrt(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10000:0.2:10000, 1.1.^(-1000:1000), 2.1.^(-1000:1000)))
x_trigh(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10:0.0002:10, -1000:0.02:1000))
x_asinhatanh(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(-10:0.0002:10, -1000:0.02:1000))
x_acosh(::Type{T}) where {T<:IEEEFloat} = map(T, vcat(1:0.0002:10, 1:0.02:1000))
x_pow(::Type{T}) where {T<:IEEEFloat} = begin
xx1 = map(Tuple{T,T}, [(x,y) for x = -100:0.20:100, y = 0.1:0.20:100])[:]
xx2 = map(Tuple{T,T}, [(x,y) for x = -100:0.21:100, y = 0.1:0.22:100])[:]
xx3 = map(Tuple{T,T}, [(x,y) for x = 2.1, y = -1000:0.1:1000])
    xx = vcat(xx1, xx2, xx3)
end
import Base.atanh
for f in (:atanh,)
@eval begin
($f)(x::Float64) = ccall($(string(f)), Float64, (Float64,), x)
($f)(x::Float32) = ccall($(string(f,"f")), Float32, (Float32,), x)
end
end
const micros = OrderedDict(
"sin" => x_trig,
"cos" => x_trig,
"tan" => x_trig,
"asin" => x_atrig,
"acos" => x_atrig,
"atan" => x_atan,
"exp" => x_exp,
"exp2" => x_exp2,
"exp10" => x_exp10,
"expm1" => x_expm1,
"log" => x_log,
"log2" => x_log10,
"log10" => x_log10,
"log1p" => x_log1p,
"sinh" => x_trigh,
"cosh" => x_trigh,
"tanh" => x_trigh,
"asinh" => x_asinhatanh,
"acosh" => x_acosh,
"atanh" => x_asinhatanh,
"cbrt" => x_cbrt
)
for n in bench
for (f,x) in micros
suite[n][f] = BenchmarkGroup([f])
for T in test_types
fex = Expr(:., Symbol(n), QuoteNode(Symbol(f)))
suite[n][f][string(T)] = @benchmarkable bench_reduce($fex, $(x(T)))
end
end
end
tune_params = joinpath(@__DIR__, "params.jld")
if !isfile(tune_params) || RETUNE
tune!(suite; verbose=VERBOSE, seconds = 2)
save(tune_params, "suite", params(suite))
println("Saving tuned parameters.")
else
println("Loading pretuned parameters.")
loadparams!(suite, load(tune_params, "suite"), :evals, :samples)
end
println("Running micro benchmarks...")
results = run(suite; verbose=VERBOSE, seconds = 2)
printstyled("Benchmarks: median ratio SLEEF/Base\n", color = :blue)
for f in keys(micros)
    printstyled(string(f), color = :magenta)
for T in test_types
println()
print("time: ", )
tratio = ratio(median(results["SLEEF"][f][string(T)]), median(results["Base"][f][string(T)])).time
tcolor = tratio > 3 ? :red : tratio < 1.5 ? :green : :blue
printstyled(@sprintf("%.2f",tratio), " ", string(T), color = tcolor)
if DETAILS
printstyled("details SLEEF/Base\n", color=:blue)
println(results["SLEEF"][f][string(T)])
println(results["Base"][f][string(T)])
println()
end
end
println("\n")
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 5487 | module SLEEFInline
# export sin, cos, tan, asin, acos, atan, sincos, sinh, cosh, tanh,
# asinh, acosh, atanh, log, log2, log10, log1p, ilogb, exp, exp2, exp10, expm1, ldexp, cbrt, pow
# fast variants (within 3 ulp)
# export sin_fast, cos_fast, tan_fast, sincos_fast, asin_fast, acos_fast, atan_fast, atan2_fast, log_fast, cbrt_fast
using Base.Math: uinttype, @horner, exponent_bias, exponent_mask, significand_bits, IEEEFloat, exponent_raw_max
const SLEEF = SLEEFInline
export SLEEF
## constants
const MLN2 = 6.931471805599453094172321214581765680755001343602552541206800094933936219696955e-01 # log(2)
const MLN2E = 1.442695040888963407359924681001892137426645954152985934135449406931 # log2(e)
const M_PI = 3.141592653589793238462643383279502884 # pi
const PI_2 = 1.570796326794896619231321691639751442098584699687552910487472296153908203143099 # pi/2
const PI_4 = 7.853981633974483096156608458198757210492923498437764552437361480769541015715495e-01 # pi/4
const M_1_PI = 0.318309886183790671537767526745028724 # 1/pi
const M_2_PI = 0.636619772367581343075535053490057448 # 2/pi
const M_4_PI = 1.273239544735162686151070106980114896275677165923651589981338752471174381073817 # 4/pi
const MSQRT2 = 1.414213562373095048801688724209698078569671875376948073176679737990732478462102 # sqrt(2)
const M1SQRT2 = 7.071067811865475244008443621048490392848359376884740365883398689953662392310596e-01 # 1/sqrt(2)
const M2P13 = 1.259921049894873164767210607278228350570251464701507980081975112155299676513956 # 2^1/3
const M2P23 = 1.587401051968199474751705639272308260391493327899853009808285761825216505624206 # 2^2/3
const MLOG10_2 = 3.3219280948873623478703194294893901758648313930
const MDLN10E(::Type{Float64}) = Double(0.4342944819032518, 1.098319650216765e-17) # log10(e)
const MDLN10E(::Type{Float32}) = Double(0.4342945f0, -1.010305f-8)
const MDLN2E(::Type{Float64}) = Double(1.4426950408889634, 2.0355273740931033e-17) # log2(e)
const MDLN2E(::Type{Float32}) = Double(1.442695f0, 1.925963f-8)
const MDLN2(::Type{Float64}) = Double(0.693147180559945286226764, 2.319046813846299558417771e-17) # log(2)
const MDLN2(::Type{Float32}) = Double(0.69314718246459960938f0, -1.904654323148236017f-9)
const MDPI(::Type{Float64}) = Double(3.141592653589793, 1.2246467991473532e-16) # pi
const MDPI(::Type{Float32}) = Double(3.1415927f0, -8.742278f-8)
const MDPI2(::Type{Float64}) = Double(1.5707963267948966, 6.123233995736766e-17) # pi/2
const MDPI2(::Type{Float32}) = Double(1.5707964f0, -4.371139f-8)
const MD2P13(::Type{Float64}) = Double(1.2599210498948732, -2.589933375300507e-17) # 2^1/3
const MD2P13(::Type{Float32}) = Double(1.2599211f0, -2.4018702f-8)
const MD2P23(::Type{Float64}) = Double(1.5874010519681996, -1.0869008194197823e-16) # 2^2/3
const MD2P23(::Type{Float32}) = Double(1.587401f0, 1.9520385f-8)
# Split pi into four parts (each is 26 bits)
const PI_A(::Type{Float64}) = 3.1415926218032836914
const PI_B(::Type{Float64}) = 3.1786509424591713469e-08
const PI_C(::Type{Float64}) = 1.2246467864107188502e-16
const PI_D(::Type{Float64}) = 1.2736634327021899816e-24
const PI_A(::Type{Float32}) = 3.140625f0
const PI_B(::Type{Float32}) = 0.0009670257568359375f0
const PI_C(::Type{Float32}) = 6.2771141529083251953f-7
const PI_D(::Type{Float32}) = 1.2154201256553420762f-10
const PI_XD(::Type{Float32}) = 1.2141754268668591976f-10
const PI_XE(::Type{Float32}) = 1.2446743939339977025f-13
# split 2/pi into upper and lower parts
const M_2_PI_H = 0.63661977236758138243
const M_2_PI_L = -3.9357353350364971764e-17
# Split log(10) into upper and lower parts
const L10U(::Type{Float64}) = 0.30102999566383914498
const L10L(::Type{Float64}) = 1.4205023227266099418e-13
const L10U(::Type{Float32}) = 0.3010253906f0
const L10L(::Type{Float32}) = 4.605038981f-6
# Split log(2) into upper and lower parts
const L2U(::Type{Float64}) = 0.69314718055966295651160180568695068359375
const L2L(::Type{Float64}) = 0.28235290563031577122588448175013436025525412068e-12
const L2U(::Type{Float32}) = 0.693145751953125f0
const L2L(::Type{Float32}) = 1.428606765330187045f-06
const TRIG_MAX(::Type{Float64}) = 1e14
const TRIG_MAX(::Type{Float32}) = 1f7
const SQRT_MAX(::Type{Float64}) = 1.3407807929942596355e154
const SQRT_MAX(::Type{Float32}) = 18446743523953729536f0
include("utils.jl") # utility functions
include("double.jl") # Dekker style double double functions
include("priv.jl") # private math functions
include("exp.jl") # exponential functions
include("log.jl") # logarithmic functions
include("trig.jl") # trigonometric and inverse trigonometric functions
include("hyp.jl") # hyperbolic and inverse hyperbolic functions
include("misc.jl") # miscallenous math functions including pow and cbrt
# fallback definitions
for func in (:sin, :cos, :tan, :sincos, :asin, :acos, :atan, :sinh, :cosh, :tanh,
:asinh, :acosh, :atanh, :log, :log2, :log10, :log1p, :exp, :exp2, :exp10, :expm1, :cbrt,
:sin_fast, :cos_fast, :tan_fast, :sincos_fast, :asin_fast, :acos_fast, :atan_fast, :atan2_fast, :log_fast, :cbrt_fast)
@eval begin
$func(a::Float16) = Float16.($func(Float32(a)))
$func(x::Real) = $func(float(x))
end
end
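# With these fallbacks, e.g. `SLEEFInline.sin(1)` promotes the integer argument to `Float64`,
# and `Float16` arguments are computed in `Float32` and converted back.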
for func in (:atan, :hypot)
@eval begin
$func(y::Real, x::Real) = $func(promote(float(y), float(x))...)
$func(a::Float16, b::Float16) = Float16($func(Float32(a), Float32(b)))
end
end
ldexp(x::Float16, q::Int) = Float16(ldexpk(Float32(x), q))
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 7399 | import Base: -, <, copysign, flipsign, convert
struct Double{T<:IEEEFloat} <: Number
hi::T
lo::T
end
Double(x::T) where {T<:IEEEFloat} = Double(x, zero(T))
(::Type{T})(x::Double{T}) where {T<:IEEEFloat} = x.hi + x.lo
@inline trunclo(x::Float64) = reinterpret(Float64, reinterpret(UInt64, x) & 0xffff_ffff_f800_0000) # clear lower 27 bits (leave upper 26 bits)
@inline trunclo(x::Float32) = reinterpret(Float32, reinterpret(UInt32, x) & 0xffff_f000) # clear lowest 12 bits (leave upper 12 bits)
@inline function splitprec(x::IEEEFloat)
hx = trunclo(x)
hx, x - hx
end
@inline function dnormalize(x::Double{T}) where {T}
r = x.hi + x.lo
Double(r, (x.hi - r) + x.lo)
end
@inline flipsign(x::Double{T}, y::T) where {T<:IEEEFloat} = Double(flipsign(x.hi, y), flipsign(x.lo, y))
@inline scale(x::Double{T}, s::T) where {T<:IEEEFloat} = Double(s * x.hi, s * x.lo)
@inline (-)(x::Double{T}) where {T<:IEEEFloat} = Double(-x.hi, -x.lo)
@inline function (<)(x::Double{T}, y::Double{T}) where {T<:IEEEFloat}
x.hi < y.hi
end
@inline function (<)(x::Double{T}, y::Number) where {T<:IEEEFloat}
x.hi < y
end
@inline function (<)(x::Number, y::Double{T}) where {T<:IEEEFloat}
x < y.hi
end
# quick-two-sum x+y
@inline function dadd(x::T, y::T) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x + y
Double(s, (x - s) + y)
end
@inline function dadd(x::T, y::Double{T}) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x + y.hi
Double(s, (x - s) + y.hi + y.lo)
end
@inline function dadd(x::Double{T}, y::T) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x.hi + y
Double(s, (x.hi - s) + y + x.lo)
end
@inline function dadd(x::Double{T}, y::Double{T}) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x.hi + y.hi
Double(s, (x.hi - s) + y.hi + y.lo + x.lo)
end
@inline function dsub(x::Double{T}, y::Double{T}) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x.hi - y.hi
Double(s, (x.hi - s) - y.hi - y.lo + x.lo)
end
@inline function dsub(x::Double{T}, y::T) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x.hi - y
Double(s, (x.hi - s) - y + x.lo)
end
@inline function dsub(x::T, y::Double{T}) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x - y.hi
Double(s, (x - s) - y.hi - y.lo)
end
@inline function dsub(x::T, y::T) where {T<:IEEEFloat} #WARNING |x| >= |y|
s = x - y
Double(s, (x - s) - y)
end
# two-sum x+y NO BRANCH
@inline function dadd2(x::T, y::T) where {T<:IEEEFloat}
s = x + y
v = s - x
Double(s, (x - (s - v)) + (y - v))
end
@inline function dadd2(x::T, y::Double{T}) where {T<:IEEEFloat}
s = x + y.hi
v = s - x
Double(s, (x - (s - v)) + (y.hi - v) + y.lo)
end
@inline dadd2(x::Double{T}, y::T) where {T<:IEEEFloat} = dadd2(y, x)
@inline function dadd2(x::Double{T}, y::Double{T}) where {T<:IEEEFloat}
s = x.hi + y.hi
v = s - x.hi
Double(s, (x.hi - (s - v)) + (y.hi - v) + x.lo + y.lo)
end
@inline function dsub2(x::T, y::T) where {T<:IEEEFloat}
s = x - y
v = s - x
Double(s, (x - (s - v)) + (-y - v))
end
@inline function dsub2(x::T, y::Double{T}) where {T<:IEEEFloat}
s = x - y.hi
v = s - x
Double(s, (x - (s - v)) + (-y.hi - v) - y.lo)
end
@inline function dsub2(x::Double{T}, y::T) where {T<:IEEEFloat}
s = x.hi - y
v = s - x.hi
Double(s, (x.hi - (s - v)) + (-y - v) + x.lo)
end
@inline function dsub2(x::Double{T}, y::Double{T}) where {T<:IEEEFloat}
s = x.hi - y.hi
v = s - x.hi
Double(s, (x.hi - (s - v)) + (-y.hi - v) + x.lo - y.lo)
end
if FMA_FAST
# two-prod-fma
@inline function dmul(x::T, y::T) where {T<:IEEEFloat}
z = x * y
Double(z, fma(x, y, -z))
end
@inline function dmul(x::Double{T}, y::T) where {T<:IEEEFloat}
z = x.hi * y
Double(z, fma(x.hi, y, -z) + x.lo * y)
end
@inline dmul(x::T, y::Double{T}) where {T<:IEEEFloat} = dmul(y, x)
@inline function dmul(x::Double{T}, y::Double{T}) where {T<:IEEEFloat}
z = x.hi * y.hi
Double(z, fma(x.hi, y.hi, -z) + x.hi * y.lo + x.lo * y.hi)
end
# x^2
@inline function dsqu(x::T) where {T<:IEEEFloat}
z = x * x
Double(z, fma(x, x, -z))
end
@inline function dsqu(x::Double{T}) where {T<:IEEEFloat}
z = x.hi * x.hi
Double(z, fma(x.hi, x.hi, -z) + x.hi * (x.lo + x.lo))
end
# sqrt(x)
@inline function dsqrt(x::Double{T}) where {T<:IEEEFloat}
zhi = _sqrt(x.hi)
Double(zhi, (x.lo + fma(-zhi, zhi, x.hi)) / (zhi + zhi))
end
# x/y
@inline function ddiv(x::Double{T}, y::Double{T}) where {T<:IEEEFloat}
invy = 1 / y.hi
zhi = x.hi * invy
Double(zhi, (fma(-zhi, y.hi, x.hi) + fma(-zhi, y.lo, x.lo)) * invy)
end
@inline function ddiv(x::T, y::T) where {T<:IEEEFloat}
ry = 1 / y
r = x * ry
Double(r, fma(-r, y, x) * ry)
end
# 1/x
@inline function drec(x::T) where {T<:IEEEFloat}
zhi = 1 / x
Double(zhi, fma(-zhi, x, one(T)) * zhi)
end
@inline function drec(x::Double{T}) where {T<:IEEEFloat}
zhi = 1 / x.hi
Double(zhi, (fma(-zhi, x.hi, one(T)) + -zhi * x.lo) * zhi)
end
else
#two-prod x*y
@inline function dmul(x::T, y::T) where {T<:IEEEFloat}
hx, lx = splitprec(x)
hy, ly = splitprec(y)
z = x * y
Double(z, ((hx * hy - z) + lx * hy + hx * ly) + lx * ly)
end
@inline function dmul(x::Double{T}, y::T) where {T<:IEEEFloat}
hx, lx = splitprec(x.hi)
hy, ly = splitprec(y)
z = x.hi * y
Double(z, (hx * hy - z) + lx * hy + hx * ly + lx * ly + x.lo * y)
end
@inline dmul(x::T, y::Double{T}) where {T<:IEEEFloat} = dmul(y, x)
@inline function dmul(x::Double{T}, y::Double{T}) where {T<:IEEEFloat}
hx, lx = splitprec(x.hi)
hy, ly = splitprec(y.hi)
z = x.hi * y.hi
Double(z, (((hx * hy - z) + lx * hy + hx * ly) + lx * ly) + x.hi * y.lo + x.lo * y.hi)
end
# x^2
@inline function dsqu(x::T) where {T<:IEEEFloat}
hx, lx = splitprec(x)
z = x * x
Double(z, (hx * hx - z) + lx * (hx + hx) + lx * lx)
end
@inline function dsqu(x::Double{T}) where {T<:IEEEFloat}
hx, lx = splitprec(x.hi)
z = x.hi * x.hi
Double(z, (hx * hx - z) + lx * (hx + hx) + lx * lx + x.hi * (x.lo + x.lo))
end
# sqrt(x)
@inline function dsqrt(x::Double{T}) where {T<:IEEEFloat}
c = _sqrt(x.hi)
u = dsqu(c)
Double(c, (x.hi - u.hi - u.lo + x.lo) / (c + c))
end
# x/y
@inline function ddiv(x::Double{T}, y::Double{T}) where {T<:IEEEFloat}
invy = 1 / y.hi
c = x.hi * invy
u = dmul(c, y.hi)
Double(c, ((((x.hi - u.hi) - u.lo) + x.lo) - c * y.lo) * invy)
end
@inline function ddiv(x::T, y::T) where {T<:IEEEFloat}
ry = 1 / y
r = x * ry
hx, lx = splitprec(r)
hy, ly = splitprec(y)
Double(r, (((-hx * hy + r * y) - lx * hy - hx * ly) - lx * ly) * ry)
end
# 1/x
@inline function drec(x::T) where {T<:IEEEFloat}
c = 1 / x
u = dmul(c, x)
Double(c, (one(T) - u.hi - u.lo) * c)
end
@inline function drec(x::Double{T}) where {T<:IEEEFloat}
c = 1 / x.hi
u = dmul(c, x.hi)
Double(c, (one(T) - u.hi - u.lo - c * x.lo) * c)
end
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 5049 | # exported exponential functions
"""
ldexp(a, n)
Computes `a × 2^n`
"""
@inline ldexp(x::Union{Float32,Float64}, q::Int) = ldexpk(x, q)
const max_exp2(::Type{Float64}) = 1024
const max_exp2(::Type{Float32}) = 128f0
const min_exp2(::Type{Float64}) = -1075
const min_exp2(::Type{Float32}) = -150f0
@inline function exp2_kernel(x::Float64)
c11 = 0.4434359082926529454e-9
c10 = 0.7073164598085707425e-8
c9 = 0.1017819260921760451e-6
c8 = 0.1321543872511327615e-5
c7 = 0.1525273353517584730e-4
c6 = 0.1540353045101147808e-3
c5 = 0.1333355814670499073e-2
c4 = 0.9618129107597600536e-2
c3 = 0.5550410866482046596e-1
c2 = 0.2402265069591012214
c1 = 0.6931471805599452862
return @horner x c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11
end
@inline function exp2_kernel(x::Float32)
c6 = 0.1535920892f-3
c5 = 0.1339262701f-2
c4 = 0.9618384764f-2
c3 = 0.5550347269f-1
c2 = 0.2402264476f0
c1 = 0.6931471825f0
return @horner x c1 c2 c3 c4 c5 c6
end
"""
exp2(x)
Compute the base-`2` exponential of `x`, that is `2ˣ`.
"""
@inline function exp2(d::T) where {T<:Union{Float32,Float64}}
q = round(d)
qi = unsafe_trunc(Int, q)
s = d - q
u = exp2_kernel(s)
u = T(dnormalize(dadd(T(1.0), dmul(u,s))))
u = ldexp2k(u, qi)
d > max_exp2(T) && (u = T(Inf))
d < min_exp2(T) && (u = T(0.0))
return u
end
const max_exp10(::Type{Float64}) = 3.08254715559916743851e2 # log10 2^1023*(2-2^-52)
const max_exp10(::Type{Float32}) = 38.531839419103626f0 # log10 2^127 *(2-2^-23)
const min_exp10(::Type{Float64}) = -3.23607245338779784854769e2 # log10 2^-1075
const min_exp10(::Type{Float32}) = -45.15449934959718f0 # log10 2^-150
@inline function exp10_kernel(x::Float64)
c11 = 0.2411463498334267652e-3
c10 = 0.1157488415217187375e-2
c9 = 0.5013975546789733659e-2
c8 = 0.1959762320720533080e-1
c7 = 0.6808936399446784138e-1
c6 = 0.2069958494722676234e0
c5 = 0.5393829292058536229e0
c4 = 0.1171255148908541655e1
c3 = 0.2034678592293432953e1
c2 = 0.2650949055239205876e1
c1 = 0.2302585092994045901e1
return @horner x c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11
end
@inline function exp10_kernel(x::Float32)
c6 = 0.2064004987f0
c5 = 0.5417877436f0
c4 = 0.1171286821f1
c3 = 0.2034656048f1
c2 = 0.2650948763f1
c1 = 0.2302585125f1
return @horner x c1 c2 c3 c4 c5 c6
end
"""
exp10(x)
Compute the base-`10` exponential of `x`, that is `10ˣ`.
"""
@inline function exp10(d::T) where {T<:Union{Float32,Float64}}
q = round(T(MLOG10_2) * d)
qi = unsafe_trunc(Int, q)
s = muladd(q, -L10U(T), d)
s = muladd(q, -L10L(T), s)
u = exp10_kernel(s)
u = T(dnormalize(dadd(T(1.0), dmul(u,s))))
u = ldexp2k(u, qi)
d > max_exp10(T) && (u = T(Inf))
d < min_exp10(T) && (u = T(0.0))
return u
end
const max_expm1(::Type{Float64}) = 7.09782712893383996732e2 # log 2^1023*(2-2^-52)
const max_expm1(::Type{Float32}) = 88.72283905206835f0 # log 2^127 *(2-2^-23)
const min_expm1(::Type{Float64}) = -37.42994775023704434602223
const min_expm1(::Type{Float32}) = -17.3286790847778338076068394f0
"""
expm1(x)
Compute `eˣ- 1` accurately for small values of `x`.
"""
@inline function expm1(x::T) where {T<:Union{Float32,Float64}}
u = T(dadd2(expk2(Double(x)), -T(1.0)))
x > max_expm1(T) && (u = T(Inf))
x < min_expm1(T) && (u = -T(1.0))
isnegzero(x) && (u = T(-0.0))
return u
end
const max_exp(::Type{Float64}) = 709.78271114955742909217217426 # log 2^1023*(2-2^-52)
const max_exp(::Type{Float32}) = 88.72283905206835f0 # log 2^127 *(2-2^-23)
const min_exp(::Type{Float64}) = -7.451332191019412076235e2 # log 2^-1075
const min_exp(::Type{Float32}) = -103.97208f0 # ≈ log 2^-150
@inline function exp_kernel(x::Float64)
c11 = 2.08860621107283687536341e-09
c10 = 2.51112930892876518610661e-08
c9 = 2.75573911234900471893338e-07
c8 = 2.75572362911928827629423e-06
c7 = 2.4801587159235472998791e-05
c6 = 0.000198412698960509205564975
c5 = 0.00138888888889774492207962
c4 = 0.00833333333331652721664984
c3 = 0.0416666666666665047591422
c2 = 0.166666666666666851703837
c1 = 0.50
return @horner x c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11
end
@inline function exp_kernel(x::Float32)
c6 = 0.000198527617612853646278381f0
c5 = 0.00139304355252534151077271f0
c4 = 0.00833336077630519866943359f0
c3 = 0.0416664853692054748535156f0
c2 = 0.166666671633720397949219f0
c1 = 0.5f0
return @horner x c1 c2 c3 c4 c5 c6
end
"""
exp(x)
Compute the base-`e` exponential of `x`, that is `eˣ`.
"""
@inline function exp(d::T) where {T<:Union{Float32,Float64}}
q = round(T(MLN2E) * d)
qi = unsafe_trunc(Int, q)
s = muladd(q, -L2U(T), d)
s = muladd(q, -L2L(T), s)
u = exp_kernel(s)
u = s * s * u + s + 1
u = ldexp2k(u, qi)
d > max_exp(T) && (u = T(Inf))
d < min_exp(T) && (u = T(0))
return u
end
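# Example (approximate values, for illustration):
#   SLEEFInline.exp(1.0)    # ≈ 2.718281828459045
#   SLEEFInline.exp(710.0)  # Inf, since the argument exceeds max_exp(Float64)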
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 2471 | # exported hyperbolic functions
over_sch(::Type{Float64}) = 710.0
over_sch(::Type{Float32}) = 89f0
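# Arguments with magnitude above these thresholds make exp(|x|) overflow,
# so sinh and cosh below saturate to Inf.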
"""
sinh(x)
Compute hyperbolic sine of `x`.
"""
@inline function sinh(x::T) where {T<:Union{Float32,Float64}}
u = abs(x)
d = expk2(Double(u))
d = dsub(d, drec(d))
u = T(d) * T(0.5)
u = abs(x) > over_sch(T) ? T(Inf) : u
u = isnan(u) ? T(Inf) : u
u = flipsign(u, x)
u = isnan(x) ? T(NaN) : u
return u
end
"""
cosh(x)
Compute hyperbolic cosine of `x`.
"""
@inline function cosh(x::T) where {T<:Union{Float32,Float64}}
u = abs(x)
d = expk2(Double(u))
d = dadd(d, drec(d))
u = T(d) * T(0.5)
u = abs(x) > over_sch(T) ? T(Inf) : u
u = isnan(u) ? T(Inf) : u
u = isnan(x) ? T(NaN) : u
return u
end
over_th(::Type{Float64}) = 18.714973875
over_th(::Type{Float32}) = 18.714973875f0
"""
tanh(x)
Compute hyperbolic tangent of `x`.
"""
@inline function tanh(x::T) where {T<:Union{Float32,Float64}}
u = abs(x)
d = expk2(Double(u))
e = drec(d)
d = ddiv(dsub(d, e), dadd(d, e))
u = T(d)
u = abs(x) > over_th(T) ? T(1.0) : u
u = isnan(u) ? T(1) : u
u = flipsign(u, x)
u = isnan(x) ? T(NaN) : u
return u
end
"""
asinh(x)
Compute the inverse hyperbolic sine of `x`.
"""
@inline function asinh(x::T) where {T<:Union{Float32,Float64}}
y = abs(x)
d = y > 1 ? drec(x) : Double(y, T(0.0))
d = dsqrt(dadd2(dsqu(d), T(1.0)))
d = y > 1 ? dmul(d, y) : d
d = logk2(dnormalize(dadd(d, x)))
y = T(d)
y = (abs(x) > SQRT_MAX(T) || isnan(y)) ? flipsign(T(Inf), x) : y
y = isnan(x) ? T(NaN) : y
y = isnegzero(x) ? T(-0.0) : y
return y
end
"""
acosh(x)
Compute the inverse hyperbolic cosine of `x`.
"""
@inline function acosh(x::T) where {T<:Union{Float32,Float64}}
d = logk2(dadd2(dmul(dsqrt(dadd2(x, T(1.0))), dsqrt(dsub2(x, T(1.0)))), x))
y = T(d)
y = (x > SQRT_MAX(T) || isnan(y)) ? T(Inf) : y
y = x == T(1.0) ? T(0.0) : y
y = x < T(1.0) ? T(NaN) : y
y = isnan(x) ? T(NaN) : y
return y
end
"""
atanh(x)
Compute the inverse hyperbolic tangent of `x`.
"""
@inline function atanh(x::T) where {T<:Union{Float32,Float64}}
u = abs(x)
d = logk2(ddiv(dadd2(T(1.0), u), dsub2(T(1.0), u)))
u = u > T(1.0) ? T(NaN) : (u == T(1.0) ? T(Inf) : T(d) * T(0.5))
u = isinf(x) || isnan(u) ? T(NaN) : u
u = flipsign(u, x)
u = isnan(x) ? T(NaN) : u
return u
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 4768 | # exported logarithmic functions
const FP_ILOGB0 = typemin(Int)
const FP_ILOGBNAN = typemin(Int)
const INT_MAX = typemax(Int)
"""
ilogb(x)
Returns the integral part of the logarithm of `abs(x)`, using base 2 for the
logarithm. In other words, this computes the binary exponent of `x` such that
x = significand × 2^exponent,
where `significand ∈ [1, 2)`.
* Exceptional cases (where `Int` is the machine wordsize)
* `x = 0` returns `FP_ILOGB0`
* `x = ±Inf` returns `INT_MAX`
* `x = NaN` returns `FP_ILOGBNAN`
"""
function ilogb(x::T) where {T<:Union{Float32,Float64}}
e = ilogbk(abs(x))
x == 0 && (e = FP_ILOGB0)
isnan(x) && (e = FP_ILOGBNAN)
isinf(x) && (e = INT_MAX)
return e
end
"""
log10(x)
Returns the base `10` logarithm of `x`.
"""
@inline function log10(a::T) where {T<:Union{Float32,Float64}}
x = T(dmul(logk(a), MDLN10E(T)))
isinf(a) && (x = T(Inf))
(a < 0 || isnan(a)) && (x = T(NaN))
a == 0 && (x = T(-Inf))
return x
end
"""
log2(x)
Returns the base `2` logarithm of `x`.
"""
@inline function log2(a::T) where {T<:Union{Float32,Float64}}
u = T(dmul(logk(a), MDLN2E(T)))
isinf(a) && (u = T(Inf))
(a < 0 || isnan(a)) && (u = T(NaN))
a == 0 && (u = T(-Inf))
return u
end
const over_log1p(::Type{Float64}) = 1e307
const over_log1p(::Type{Float32}) = 1f38
"""
log1p(x)
Accurately compute the natural logarithm of 1+x.
"""
function log1p(a::T) where {T<:Union{Float32,Float64}}
x = T(logk2(dadd2(a, T(1.0))))
a > over_log1p(T) && (x = T(Inf))
a < -1 && (x = T(NaN))
a == -1 && (x = T(-Inf))
isnegzero(a) && (x = T(-0.0))
return x
end
@inline function log_kernel(x::Float64)
c7 = 0.1532076988502701353
c6 = 0.1525629051003428716
c5 = 0.1818605932937785996
c4 = 0.2222214519839380009
c3 = 0.2857142932794299317
c2 = 0.3999999999635251990
c1 = 0.6666666666667333541
return @horner x c1 c2 c3 c4 c5 c6 c7
end
@inline function log_kernel(x::Float32)
c3 = 0.3027294874f0
c2 = 0.3996108174f0
c1 = 0.6666694880f0
return @horner x c1 c2 c3
end
"""
log(x)
Compute the natural logarithm of `x`. The inverse of the natural logarithm is
the natural exponential function `exp(x)`.
"""
@inline function log(d::T) where {T<:Union{Float32,Float64}}
o = d < floatmin(T)
o && (d *= T(Int64(1) << 32) * T(Int64(1) << 32))
e = ilogb2k(d * T(1.0/0.75))
m = ldexp3k(d, -e)
o && (e -= 64)
x = ddiv(dadd2(T(-1.0), m), dadd2(T(1.0), m))
x2 = x.hi*x.hi
t = log_kernel(x2)
s = dmul(MDLN2(T), T(e))
s = dadd(s, scale(x, T(2.0)))
s = dadd(s, x2*x.hi*t)
r = T(s)
isinf(d) && (r = T(Inf))
(d < 0 || isnan(d)) && (r = T(NaN))
d == 0 && (r = -T(Inf))
return r
end
# First we split the argument into its mantissa `m` and integer exponent `e` so
# that `d = m \times 2^e`, where `m \in [0.5, 1)`, then we apply the polynomial
# approximant to this reduced argument `m` before putting the exponent back
# in. This first part is done with the help of the private functions
# `ilogb2k(x)` and `ldexp3k(x, e)`, and we put the exponent back using
# `\log(m \times 2^e) = \log(m) + \log 2^e = \log(m) + e \times MLN2`.
# The polynomial we evaluate is based on coefficients from
# `\log(x) = 2\sum_{n=0}^\infty \frac{1}{2n+1} \bigl(\frac{x-1}{x+1}\bigr)^{2n+1}`.
# That being said, since this series converges faster when the argument is close
# to 1, we multiply `m` by 2 and subtract 1 from the exponent `e` when `m` is
# less than `sqrt(2)/2`.
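# For example, for d = 10.0 the reduction used in `log` above gives e = 3 and m = 1.25
# (10.0 == 1.25 * 2^3), so the logarithm is assembled as log(1.25) + 3 * MLN2
# ≈ 0.22314 + 2.07944 ≈ 2.30259.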
@inline function log_fast_kernel(x::Float64)
c8 = 0.153487338491425068243146
c7 = 0.152519917006351951593857
c6 = 0.181863266251982985677316
c5 = 0.222221366518767365905163
c4 = 0.285714294746548025383248
c3 = 0.399999999950799600689777
c2 = 0.6666666666667778740063
c1 = 2.0
return @horner x c1 c2 c3 c4 c5 c6 c7 c8
end
@inline function log_fast_kernel(x::Float32)
c5 = 0.2392828464508056640625f0
c4 = 0.28518211841583251953125f0
c3 = 0.400005877017974853515625f0
c2 = 0.666666686534881591796875f0
c1 = 2f0
return @horner x c1 c2 c3 c4 c5
end
"""
log_fast(x)
Compute the natural logarithm of `x`. The inverse of the natural logarithm is
the natural exponential function `exp(x)`.
"""
@inline function log_fast(d::T) where {T<:Union{Float32,Float64}}
o = d < floatmin(T)
o && (d *= T(Int64(1) << 32) * T(Int64(1) << 32))
e = ilogb2k(d * T(1.0/0.75))
m = ldexp3k(d, -e)
o && (e -= 64)
x = (m - 1) / (m + 1)
x2 = x * x
t = log_fast_kernel(x2)
x = x * t + T(MLN2) * e
isinf(d) && (x = T(Inf))
(d < 0 || isnan(d)) && (x = T(NaN))
d == 0 && (x = -T(Inf))
return x
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 2892 |
"""
pow(x, y)
Exponentiation operator, returns `x` raised to the power `y`.
"""
@inline function pow(x::T, y::T) where {T<:Union{Float32,Float64}}
yi = unsafe_trunc(Int, y)
yisint = yi == y
yisodd = isodd(yi) && yisint
result = expk(dmul(logk(abs(x)), y))
result = isnan(result) ? T(Inf) : result
result *= (x > 0 ? T(1.0) : (!yisint ? T(NaN) : (yisodd ? -T(1.0) : T(1.0))))
efx = flipsign(abs(x) - 1, y)
isinf(y) && (result = efx < 0 ? T(0.0) : (efx == 0 ? T(1.0) : T(Inf)))
(isinf(x) || x == 0) && (result = (yisodd ? _sign(x) : T(1.0)) * ((x == 0 ? -y : y) < 0 ? T(0.0) : T(Inf)))
(isnan(x) || isnan(y)) && (result = T(NaN))
(y == 0 || x == 1) && (result = T(1.0))
return result
end
let
global cbrt_fast
global cbrt
c6d = -0.640245898480692909870982
c5d = 2.96155103020039511818595
c4d = -5.73353060922947843636166
c3d = 6.03990368989458747961407
c2d = -3.85841935510444988821632
c1d = 2.2307275302496609725722
c6f = -0.601564466953277587890625f0
c5f = 2.8208892345428466796875f0
c4f = -5.532182216644287109375f0
c3f = 5.898262500762939453125f0
c2f = -3.8095417022705078125f0
c1f = 2.2241256237030029296875f0
global @inline cbrt_kernel(x::Float64) = @horner x c1d c2d c3d c4d c5d c6d
global @inline cbrt_kernel(x::Float32) = @horner x c1f c2f c3f c4f c5f c6f
"""
cbrt_fast(x)
Return `x^{1/3}`.
"""
@inline function cbrt_fast(d::T) where {T<:Union{Float32,Float64}}
e = ilogbk(abs(d)) + 1
d = ldexp2k(d, -e)
r = (e + 6144) % 3
q = r == 1 ? T(M2P13) : T(1)
q = r == 2 ? T(M2P23) : q
q = ldexp2k(q, (e + 6144) ÷ 3 - 2048)
q = flipsign(q, d)
d = abs(d)
x = cbrt_kernel(d)
y = x * x
y = y * y
x -= (d * y - x) * T(1 / 3)
y = d * x * x
y = (y - T(2 / 3) * y * (y * x - 1)) * q
end
"""
cbrt(x)
Return `x^{1/3}`. The prefix operator `∛` is equivalent to `cbrt`.
"""
@inline function cbrt(d::T) where {T<:Union{Float32,Float64}}
e = ilogbk(abs(d)) + 1
d = ldexp2k(d, -e)
r = (e + 6144) % 3
q2 = r == 1 ? MD2P13(T) : Double(T(1))
q2 = r == 2 ? MD2P23(T) : q2
q2 = flipsign(q2, d)
d = abs(d)
x = cbrt_kernel(d)
y = x * x
y = y * y
x -= (d * y - x) * T(1 / 3)
z = x
u = dsqu(x)
u = dsqu(u)
u = dmul(u, d)
u = dsub(u, x)
y = T(u)
y = -T(2 / 3) * y * z
v = dadd(dsqu(z), y)
v = dmul(v, d)
v = dmul(v, q2)
z = ldexp2k(T(v), (e + 6144) ÷ 3 - 2048)
isinf(d) && (z = flipsign(T(Inf), q2.hi))
d == 0 && (z = flipsign(T(0), q2.hi))
return z
end
end
"""
hypot(x,y)
Compute the hypotenuse `\\sqrt{x^2+y^2}` avoiding overflow and underflow.
"""
@inline function hypot(x::T, y::T) where {T<:IEEEFloat}
x = abs(x)
y = abs(y)
if x < y
x, y = y, x
end
r = (x == 0) ? y : y / x
x * sqrt(T(1.0) + r * r)
end
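# For example, hypot(3.0, 4.0) returns 5.0, and hypot(1e300, 1e300) ≈ 1.414e300,
# even though squaring either argument directly would overflow to Inf.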
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 9650 | # private math functions
"""
A helper function for `ldexpk`
First note that `r = (q >> n) << n` clears the lowest n bits of q, i.e. returns 2^n where n is the
largest integer such that q >= 2^n
For numbers q less than 2^m the following code does the same as the above snippet
`r = ( (q>>v + q) >> n - q>>v ) << n`
For numbers larger than or equal to 2^v this subtracts 2^n from q for q>>n times.
The function returns q(input) := q(output) + offset*r
In the code for ldexpk we actually use
`m = ( (m>>n + m) >> n - m>>m ) << (n-2)`.
So that x has to be multplied by u four times `x = x*u*u*u*u` to put the value of the offset
exponent amount back in.
"""
@inline function _split_exponent(q, n, v, offset)
m = q >> v
m = (((m + q) >> n) - m) << (n - offset)
q = q - (m << offset)
m, q
end
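# For example, with the Float64 parameters n = 9, v = 31, offset = 2:
#   _split_exponent(700, UInt(9), UInt(31), UInt(2)) == (128, 188)
# and indeed 188 + 128 * 2^2 == 700, so `ldexpk` can multiply by 2^128 four times and
# by 2^188 once, i.e. by 2^700 in total, using only representable powers of two.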
@inline split_exponent(::Type{Float64}, q::Int) = _split_exponent(q, UInt(9), UInt(31), UInt(2))
@inline split_exponent(::Type{Float32}, q::Int) = _split_exponent(q, UInt(6), UInt(31), UInt(2))
"""
ldexpk(a, n)
Computes `a × 2^n`.
"""
@inline function ldexpk(x::T, q::Int) where {T<:Union{Float32,Float64}}
bias = exponent_bias(T)
emax = exponent_raw_max(T)
m, q = split_exponent(T, q)
m += bias
m = ifelse(m < 0, 0, m)
m = ifelse(m > emax, emax, m)
q += bias
u = integer2float(T, m)
x = x * u * u * u * u
u = integer2float(T, q)
x * u
end
@inline function ldexp2k(x::T, e::Int) where {T<:Union{Float32,Float64}}
x * pow2i(T, e >> 1) * pow2i(T, e - (e >> 1))
end
@inline function ldexp3k(x::T, e::Int) where {T<:Union{Float32,Float64}}
reinterpret(T, reinterpret(Unsigned, x) + (Int64(e) << significand_bits(T)) % uinttype(T))
end
# threshold values for `ilogbk`
const threshold_exponent(::Type{Float64}) = 300
const threshold_exponent(::Type{Float32}) = 64
"""
ilogbk(x) -> Int
Returns the integral part of the logarithm of `|x|`, using 2 as base for the logarithm; in other
words this returns the binary exponent of `x` so that
x = significand × 2^exponent
where `significand ∈ [1, 2)`.
"""
@inline function ilogbk(d::T) where {T<:Union{Float32,Float64}}
m = d < T(2)^-threshold_exponent(T)
d = ifelse(m, d * T(2)^threshold_exponent(T), d)
q = float2integer(d) & exponent_raw_max(T)
q = ifelse(m, q - (threshold_exponent(T) + exponent_bias(T)), q - exponent_bias(T))
end
# similar to ilogbk, but argument has to be a normalized float value
@inline function ilogb2k(d::T) where {T<:Union{Float32,Float64}}
(float2integer(d) & exponent_raw_max(T)) - exponent_bias(T)
end
let
global atan2k_fast
global atan2k
c20d = 1.06298484191448746607415e-05
c19d = -0.000125620649967286867384336
c18d = 0.00070557664296393412389774
c17d = -0.00251865614498713360352999
c16d = 0.00646262899036991172313504
c15d = -0.0128281333663399031014274
c14d = 0.0208024799924145797902497
c13d = -0.0289002344784740315686289
c12d = 0.0359785005035104590853656
c11d = -0.041848579703592507506027
c10d = 0.0470843011653283988193763
c9d = -0.0524914210588448421068719
c8d = 0.0587946590969581003860434
c7d = -0.0666620884778795497194182
c6d = 0.0769225330296203768654095
c5d = -0.0909090442773387574781907
c4d = 0.111111108376896236538123
c3d = -0.142857142756268568062339
c2d = 0.199999999997977351284817
c1d = -0.333333333333317605173818
c9f = -0.00176397908944636583328247f0
c8f = 0.0107900900766253471374512f0
c7f = -0.0309564601629972457885742f0
c6f = 0.0577365085482597351074219f0
c5f = -0.0838950723409652709960938f0
c4f = 0.109463557600975036621094f0
c3f = -0.142626821994781494140625f0
c2f = 0.199983194470405578613281f0
c1f = -0.333332866430282592773438f0
global @inline atan2k_fast_kernel(x::Float64) = @horner x c1d c2d c3d c4d c5d c6d c7d c8d c9d c10d c11d c12d c13d c14d c15d c16d c17d c18d c19d c20d
global @inline atan2k_fast_kernel(x::Float32) = @horner x c1f c2f c3f c4f c5f c6f c7f c8f c9f
@inline function atan2k_fast(y::T, x::T) where {T<:Union{Float32,Float64}}
q = 0
if x < 0
x = -x
q = -2
end
if y > x
t = x; x = y
y = -t
q += 1
end
s = y / x
t = s * s
u = atan2k_fast_kernel(t)
t = u * t * s + s
q * T(PI_2) + t
end
global @inline atan2k_kernel(x::Double{Float64}) = @horner x.hi c1d c2d c3d c4d c5d c6d c7d c8d c9d c10d c11d c12d c13d c14d c15d c16d c17d c18d c19d c20d
global @inline atan2k_kernel(x::Double{Float32}) = dadd(c1f, x.hi * (@horner x.hi c2f c3f c4f c5f c6f c7f c8f c9f))
@inline function atan2k(y::Double{T}, x::Double{T}) where {T<:Union{Float32,Float64}}
q = 0
if x < 0
x = -x
q = -2
end
if y > x
t = x; x = y
y = -t
q += 1
end
s = ddiv(y, x)
t = dsqu(s)
t = dnormalize(t)
u = atan2k_kernel(t)
t = dmul(t, u)
t = dmul(s, dadd(T(1.0), t))
T <: Float64 && abs(s.hi) < 1e-200 && (t = s)
t = dadd(dmul(T(q), MDPI2(T)), t)
return t
end
end
const under_expk(::Type{Float64}) = -1000.0
const under_expk(::Type{Float32}) = -104f0
@inline function expk_kernel(x::Float64)
c10 = 2.51069683420950419527139e-08
c9 = 2.76286166770270649116855e-07
c8 = 2.75572496725023574143864e-06
c7 = 2.48014973989819794114153e-05
c6 = 0.000198412698809069797676111
c5 = 0.0013888888939977128960529
c4 = 0.00833333333332371417601081
c3 = 0.0416666666665409524128449
c2 = 0.166666666666666740681535
c1 = 0.500000000000000999200722
return @horner x c1 c2 c3 c4 c5 c6 c7 c8 c9 c10
end
@inline function expk_kernel(x::Float32)
c5 = 0.00136324646882712841033936f0
c4 = 0.00836596917361021041870117f0
c3 = 0.0416710823774337768554688f0
c2 = 0.166665524244308471679688f0
c1 = 0.499999850988388061523438f0
return @horner x c1 c2 c3 c4 c5
end
@inline function expk(d::Double{T}) where {T<:Union{Float32,Float64}}
q = round(T(d) * T(MLN2E))
qi = unsafe_trunc(Int, q)
s = dadd(d, -q * L2U(T))
s = dadd(s, -q * L2L(T))
s = dnormalize(s)
u = expk_kernel(T(s))
t = dadd(s, dmul(dsqu(s), u))
t = dadd(T(1.0), t)
u = ldexpk(T(t), qi)
(d.hi < under_expk(T)) && (u = T(0.0))
return u
end
@inline function expk2_kernel(x::Double{Float64})
c11 = 0.1602472219709932072e-9
c10 = 0.2092255183563157007e-8
c9 = 0.2505230023782644465e-7
c8 = 0.2755724800902135303e-6
c7 = 0.2755731892386044373e-5
c6 = 0.2480158735605815065e-4
c5 = 0.1984126984148071858e-3
c4 = 0.1388888888886763255e-2
c3 = 0.8333333333333347095e-2
c2 = 0.4166666666666669905e-1
c1 = 0.1666666666666666574e0
u = @horner x.hi c2 c3 c4 c5 c6 c7 c8 c9 c10 c11
return dadd(dmul(x, u), c1)
end
@inline function expk2_kernel(x::Double{Float32})
c5 = 0.1980960224f-3
c4 = 0.1394256484f-2
c3 = 0.8333456703f-2
c2 = 0.4166637361f-1
c1 = 0.166666659414234244790680580464f0
u = @horner x.hi c2 c3 c4 c5
return dadd(dmul(x, u), c1)
end
@inline function expk2(d::Double{T}) where {T<:Union{Float32,Float64}}
q = round(T(d) * T(MLN2E))
qi = unsafe_trunc(Int, q)
s = dadd(d, -q * L2U(T))
s = dadd(s, -q * L2L(T))
t = expk2_kernel(s)
t = dadd(dmul(s, t), T(0.5))
t = dadd(s, dmul(dsqu(s), t))
t = dadd(T(1.0), t)
t = Double(ldexp2k(t.hi, qi), ldexp2k(t.lo, qi))
(d.hi < under_expk(T)) && (t = Double(T(0.0)))
return t
end
@inline function logk2_kernel(x::Float64)
c8 = 0.13860436390467167910856
c7 = 0.131699838841615374240845
c6 = 0.153914168346271945653214
c5 = 0.181816523941564611721589
c4 = 0.22222224632662035403996
c3 = 0.285714285511134091777308
c2 = 0.400000000000914013309483
c1 = 0.666666666666664853302393
return @horner x c1 c2 c3 c4 c5 c6 c7 c8
end
@inline function logk2_kernel(x::Float32)
c4 = 0.240320354700088500976562f0
c3 = 0.285112679004669189453125f0
c2 = 0.400007992982864379882812f0
c1 = 0.666666686534881591796875f0
return @horner x c1 c2 c3 c4
end
@inline function logk2(d::Double{T}) where {T<:Union{Float32,Float64}}
e = ilogbk(d.hi * T(1.0/0.75))
m = scale(d, pow2i(T, -e))
x = ddiv(dsub2(m, T(1.0)), dadd2(m, T(1.0)))
x2 = dsqu(x)
t = logk2_kernel(x2.hi)
s = dmul(MDLN2(T), T(e))
s = dadd(s, scale(x, T(2.0)))
s = dadd(s, dmul(dmul(x2, x), t))
return s
end
@inline function logk_kernel(x::Double{Float64})
c10 = 0.116255524079935043668677
c9 = 0.103239680901072952701192
c8 = 0.117754809412463995466069
c7 = 0.13332981086846273921509
c6 = 0.153846227114512262845736
c5 = 0.181818180850050775676507
c4 = 0.222222222230083560345903
c3 = 0.285714285714249172087875
c2 = 0.400000000000000077715612
c1 = Double(0.666666666666666629659233, 3.80554962542412056336616e-17)
dadd2(dmul(x, @horner x.hi c2 c3 c4 c5 c6 c7 c8 c9 c10), c1)
end
@inline function logk_kernel(x::Double{Float32})
c4 = 0.240320354700088500976562f0
c3 = 0.285112679004669189453125f0
c2 = 0.400007992982864379882812f0
c1 = Double(0.66666662693023681640625f0, 3.69183861259614332084311f-9)
dadd2(dmul(x, @horner x.hi c2 c3 c4), c1)
end
@inline function logk(d::T) where {T<:Union{Float32,Float64}}
o = d < floatmin(T)
o && (d *= T(Int64(1) << 32) * T(Int64(1) << 32))
e = ilogb2k(d * T(1.0/0.75))
m = ldexp3k(d, -e)
o && (e -= 64)
x = ddiv(dsub2(m, T(1.0)), dadd2(T(1.0), m))
x2 = dsqu(x)
t = logk_kernel(x2)
s = dmul(MDLN2(T), T(e))
s = dadd(s, scale(x, T(2.0)))
s = dadd(s, dmul(dmul(x2, x), t))
return s
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 21915 | # exported trigonometric functions
"""
sin(x)
Compute the sine of `x`, where the output is in radians.
"""
function sin end
"""
cos(x)
Compute the cosine of `x`, where the output is in radians.
"""
function cos end
@inline function sincos_kernel(x::Double{Float64})
c8 = 2.72052416138529567917983e-15
c7 = -7.64292594113954471900203e-13
c6 = 1.60589370117277896211623e-10
c5 = -2.5052106814843123359368e-08
c4 = 2.75573192104428224777379e-06
c3 = -0.000198412698412046454654947
c2 = 0.00833333333333318056201922
c1 = -0.166666666666666657414808
return dadd(c1, x.hi * (@horner x.hi c2 c3 c4 c5 c6 c7 c8))
end
@inline function sincos_kernel(x::Double{Float32})
c4 = 2.6083159809786593541503f-06
c3 = -0.0001981069071916863322258f0
c2 = 0.00833307858556509017944336f0
c1 = -0.166666597127914428710938f0
return dadd(c1, x.hi * (@horner x.hi c2 c3 c4))
end
@inline function sin(d::T) where {T<:Float64}
qh = trunc(d * (T(M_1_PI) / (1 << 24)))
ql = round(d * T(M_1_PI) - qh * (1 << 24))
s = dadd2(d, qh * (-PI_A(T) * (1 << 24)))
s = dadd2(s, ql * (-PI_A(T) ))
s = dadd2(s, qh * (-PI_B(T) * (1 << 24)))
s = dadd2(s, ql * (-PI_B(T) ))
s = dadd2(s, qh * (-PI_C(T) * (1 << 24)))
s = dadd2(s, ql * (-PI_C(T) ))
s = dadd2(s, (qh * (1 << 24) + ql) * - PI_D(T))
t = s
s = dsqu(s)
w = sincos_kernel(s)
v = dmul(t, dadd(T(1.0), dmul(w, s)))
u = T(v)
qli = unsafe_trunc(Int, ql)
qli & 1 != 0 && (u = -u)
!isinf(d) && (isnegzero(d) || abs(d) > TRIG_MAX(T)) && (u = T(-0.0))
return u
end
@inline function sin(d::T) where {T<:Float32}
q = round(d * T(M_1_PI))
s = dadd2(d, q * -PI_A(T))
s = dadd2(s, q * -PI_B(T))
s = dadd2(s, q * -PI_C(T))
s = dadd2(s, q * -PI_D(T))
t = s
s = dsqu(s)
w = sincos_kernel(s)
v = dmul(t, dadd(T(1.0), dmul(w, s)))
u = T(v)
qi = unsafe_trunc(Int, q)
qi & 1 != 0 && (u = -u)
!isinf(d) && (isnegzero(d) || abs(d) > TRIG_MAX(T)) && (u = T(-0.0))
return u
end
@inline function cos(d::T) where {T<:Float64}
d = abs(d)
qh = trunc(d * (T(M_1_PI) / (1 << 23)) - T(0.5) * (T(M_1_PI) / (1 << 23)))
ql = 2*round(d * T(M_1_PI) - T(0.5) - qh * (1 << 23)) + 1
s = dadd2(d, qh * (-PI_A(T)* T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_A(T)* T(0.5) ))
s = dadd2(s, qh * (-PI_B(T)* T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_B(T)* T(0.5) ))
s = dadd2(s, qh * (-PI_C(T)* T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_C(T)* T(0.5) ))
s = dadd2(s, (qh * (1 << 24) + ql) * (-PI_D(T) * T(0.5)))
t = s
s = dsqu(s)
w = sincos_kernel(s)
v = dmul(t, dadd(T(1.0), dmul(w, s)))
u = T(v)
qli = unsafe_trunc(Int, ql)
qli & 2 == 0 && (u = -u)
!isinf(d) && (d > TRIG_MAX(T)) && (u = T(0.0))
return u
end
@inline function cos(d::T) where {T<:Float32}
d = abs(d)
q = 1 + 2*round(d * T(M_1_PI) - T(0.5))
s = dadd2(d, q * -PI_A(T)* T(0.5))
s = dadd2(s, q * -PI_B(T)* T(0.5))
s = dadd2(s, q * -PI_C(T)* T(0.5))
s = dadd2(s, q * -PI_D(T)* T(0.5))
t = s
s = dsqu(s)
w = sincos_kernel(s)
v = dmul(t, dadd(T(1.0), dmul(w, s)))
u = T(v)
qi = unsafe_trunc(Int, q)
qi & 2 == 0 && (u = -u)
!isinf(d) && (d > TRIG_MAX(T)) && (u = T(0.0))
return u
end
"""
sin_fast(x)
Compute the sine of `x`, where `x` is in radians.
"""
function sin_fast end
"""
cos_fast(x)
Compute the cosine of `x`, where `x` is in radians.
"""
function cos_fast end
# The argument is first reduced to the domain -π/2 ≤ s ≤ π/2.
# We return the correct sign using `q & 1 != 0` i.e. q is odd (this works for
# positive and negative q) and if this condition is true we flip the sign since
# we are now in the negative branch of sin(x). Recall that q is d/π rounded to the
# nearest integer and thus we can determine the correct sign using this information.
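# For example, for d = 4.0 we get q = round(4/π) = 1 (odd), the reduced argument is
# s = 4 - π ≈ 0.8584, and sin(4.0) = -sin(0.8584) ≈ -0.7568.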
@inline function sincos_fast_kernel(x::Float64)
c9 = -7.97255955009037868891952e-18
c8 = 2.81009972710863200091251e-15
c7 = -7.64712219118158833288484e-13
c6 = 1.60590430605664501629054e-10
c5 = -2.50521083763502045810755e-08
c4 = 2.75573192239198747630416e-06
c3 = -0.000198412698412696162806809
c2 = 0.00833333333333332974823815
c1 = -0.166666666666666657414808
return @horner x c1 c2 c3 c4 c5 c6 c7 c8 c9
end
@inline function sincos_fast_kernel(x::Float32)
c4 = 2.6083159809786593541503f-06
c3 = -0.0001981069071916863322258f0
c2 = 0.00833307858556509017944336f0
c1 = -0.166666597127914428710938f0
return @horner x c1 c2 c3 c4
end
@inline function sin_fast(d::T) where {T<:Float64}
t = d
qh = trunc(d * (T(M_1_PI) / (1 << 24)))
ql = round(d * T(M_1_PI) - qh * (1 << 24))
d = muladd(qh , -PI_A(T) * (1 << 24) , d)
d = muladd(ql , -PI_A(T) , d)
d = muladd(qh , -PI_B(T) * (1 << 24) , d)
d = muladd(ql , -PI_B(T) , d)
d = muladd(qh , -PI_C(T) * (1 << 24) , d)
d = muladd(ql , -PI_C(T) , d)
d = muladd(qh * (1 << 24) + ql, -PI_D(T), d)
s = d * d
qli = unsafe_trunc(Int, ql)
qli & 1 != 0 && (d = -d)
u = sincos_fast_kernel(s)
u = muladd(s, u * d, d)
!isinf(t) && (isnegzero(t) || abs(t) > TRIG_MAX(T)) && (u = T(-0.0))
return u
end
@inline function sin_fast(d::T) where {T<:Float32}
t = d
q = round(d * T(M_1_PI))
d = muladd(q , -PI_A(T), d)
d = muladd(q , -PI_B(T), d)
d = muladd(q , -PI_C(T), d)
d = muladd(q , -PI_D(T), d)
s = d * d
qli = unsafe_trunc(Int, q)
qli & 1 != 0 && (d = -d)
u = sincos_fast_kernel(s)
u = muladd(s, u * d, d)
!isinf(t) && (isnegzero(t) || abs(t) > TRIG_MAX(T)) && (u = T(-0.0))
return u
end
@inline function cos_fast(d::T) where {T<:Float64}
t = d
qh = trunc(d * (T(M_1_PI) / (1 << 23)) - T(0.5) * (T(M_1_PI) / (1 << 23)))
ql = 2*round(d * T(M_1_PI) - T(0.5) - qh * (1 << 23)) + 1
d = muladd(qh , -PI_A(T) * T(0.5) * (1 << 24) , d)
d = muladd(ql , -PI_A(T) * T(0.5) , d)
d = muladd(qh , -PI_B(T) * T(0.5) * (1 << 24) , d)
d = muladd(ql , -PI_B(T) * T(0.5) , d)
d = muladd(qh , -PI_C(T) * T(0.5) * (1 << 24) , d)
d = muladd(ql , -PI_C(T) * T(0.5) , d)
d = muladd(qh * (1 << 24) + ql, -PI_D(T) * T(0.5), d)
s = d * d
qli = unsafe_trunc(Int, ql)
qli & 2 == 0 && (d = -d)
u = sincos_fast_kernel(s)
u = muladd(s, u * d, d)
!isinf(t) && (abs(t) > TRIG_MAX(T)) && (u = T(0.0))
return u
end
@inline function cos_fast(d::T) where {T<:Float32}
t = d
q = 1 + 2*round(d * T(M_1_PI) - T(0.5))
d = muladd(q, -PI_A(T) * T(0.5), d)
d = muladd(q, -PI_B(T) * T(0.5), d)
d = muladd(q, -PI_C(T) * T(0.5), d)
d = muladd(q, -PI_D(T) * T(0.5), d)
s = d * d
qi = unsafe_trunc(Int, q)
qi & 2 == 0 && (d = -d)
u = sincos_fast_kernel(s)
u = muladd(s, u * d, d)
!isinf(t) && (abs(t) > TRIG_MAX(T)) && (u = T(0.0))
return u
end
"""
sincos(x)
Compute the sin and cosine of `x` simultaneously, where the output is in
radians, returning a tuple.
"""
function sincos end
"""
sincos_fast(x)
Compute the sin and cosine of `x` simultaneously, where the output is in
radians, returning a tuple.
"""
function sincos_fast end
@inline function sincos_a_kernel(x::Float64)
a6 = 1.58938307283228937328511e-10
a5 = -2.50506943502539773349318e-08
a4 = 2.75573131776846360512547e-06
a3 = -0.000198412698278911770864914
a2 = 0.0083333333333191845961746
a1 = -0.166666666666666130709393
return @horner x a1 a2 a3 a4 a5 a6
end
@inline function sincos_a_kernel(x::Float32)
a3 = -0.000195169282960705459117889f0
a2 = 0.00833215750753879547119141f0
a1 = -0.166666537523269653320312f0
return @horner x a1 a2 a3
end
@inline function sincos_b_kernel(x::Float64)
b7 = -1.13615350239097429531523e-11
b6 = 2.08757471207040055479366e-09
b5 = -2.75573144028847567498567e-07
b4 = 2.48015872890001867311915e-05
b3 = -0.00138888888888714019282329
b2 = 0.0416666666666665519592062
b1 = -0.50
return @horner x b1 b2 b3 b4 b5 b6 b7
end
@inline function sincos_b_kernel(x::Float32)
b5 = -2.71811842367242206819355f-07
b4 = 2.47990446951007470488548f-05
b3 = -0.00138888787478208541870117f0
b2 = 0.0416666641831398010253906f0
b1 = -0.5f0
return @horner x b1 b2 b3 b4 b5
end
@inline function sincos_fast(d::T) where {T<:Float64}
s = d
qh = trunc(d * ((2 * T(M_1_PI)) / (1 << 24)))
ql = round(d * (2 * T(M_1_PI)) - qh * (1 << 24))
s = muladd(qh, -PI_A(T) * T(0.5) * (1 << 24), s)
s = muladd(ql, -PI_A(T) * T(0.5), s)
s = muladd(qh, -PI_B(T) * T(0.5) * (1 << 24), s)
s = muladd(ql, -PI_B(T) * T(0.5), s)
s = muladd(qh, -PI_C(T) * T(0.5) * (1 << 24), s)
s = muladd(ql, -PI_C(T) * T(0.5), s)
s = muladd(qh * (1 << 24) + ql, -PI_D(T) * 0.5, s)
t = s
s = s * s
u = sincos_a_kernel(s)
u = u * s * t
rx = t + u
isnegzero(d) && (rx = T(-0.0))
u = sincos_b_kernel(s)
ry = u * s + T(1.0)
qli = unsafe_trunc(Int, ql)
qli & 1 != 0 && (s = ry; ry = rx; rx = s)
qli & 2 != 0 && (rx = -rx)
(qli + 1) & 2 != 0 && (ry = -ry)
abs(d) > TRIG_MAX(T) && (rx = ry = T(0.0))
isinf(d) && (rx = ry = T(NaN))
return (rx, ry)
end
@inline function sincos_fast(d::T) where {T<:Float32}
s = d
q = round(d * (2 * T(M_1_PI)))
s = muladd(q, -PI_A(T) * T(0.5), s)
s = muladd(q, -PI_B(T) * T(0.5), s)
s = muladd(q, -PI_C(T) * T(0.5), s)
s = muladd(q, -PI_D(T) * T(0.5), s)
t = s
s = s * s
u = sincos_a_kernel(s)
u = u * s * t
rx = t + u
isnegzero(d) && (rx = T(-0.0))
u = sincos_b_kernel(s)
ry = u * s + T(1.0)
qi = unsafe_trunc(Int, q)
qi & 1 != 0 && (s = ry; ry = rx; rx = s)
qi & 2 != 0 && (rx = -rx)
(qi + 1) & 2 != 0 && (ry = -ry)
abs(d) > TRIG_MAX(T) && (rx = ry = T(0.0))
isinf(d) && (rx = ry = T(NaN))
return (rx, ry)
end
@inline function sincos(d::T) where {T<:Float64}
qh = trunc(d * ((2 * T(M_1_PI)) / (1 << 24)))
ql = round(d * (2 * T(M_1_PI)) - qh * (1 << 24))
s = dadd2(d, qh * (-PI_A(T) * T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_A(T) * T(0.5) ))
s = dadd2(s, qh * (-PI_B(T) * T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_B(T) * T(0.5) ))
s = dadd2(s, qh * (-PI_C(T) * T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_C(T) * T(0.5) ))
s = dadd2(s, (qh * (1 << 24) + ql) * (-PI_D(T) * T(0.5)))
t = s
s = dsqu(s)
sx = T(s)
u = sincos_a_kernel(sx)
u *= sx * t.hi
v = dadd(t, u)
rx = T(v)
isnegzero(d) && (rx = T(-0.0))
u = sincos_b_kernel(sx)
v = dadd(T(1.0), dmul(sx, u))
ry = T(v)
qli = unsafe_trunc(Int, ql)
qli & 1 != 0 && (u = ry; ry = rx; rx = u)
qli & 2 != 0 && (rx = -rx)
(qli + 1) & 2 != 0 && (ry = -ry)
abs(d) > TRIG_MAX(T) && (rx = ry = T(0.0))
isinf(d) && (rx = ry = T(NaN))
return (rx, ry)
end
@inline function sincos(d::T) where {T<:Float32}
q = round(d * (2 * T(M_1_PI)))
s = dadd2(d, q * (-PI_A(T) * T(0.5)))
s = dadd2(s, q * (-PI_B(T) * T(0.5)))
s = dadd2(s, q * (-PI_C(T) * T(0.5)))
s = dadd2(s, q * (-PI_D(T) * T(0.5)))
t = s
s = dsqu(s)
sx = T(s)
u = sincos_a_kernel(sx)
u *= sx * t.hi
v = dadd(t, u)
rx = T(v)
isnegzero(d) && (rx = T(-0.0))
u = sincos_b_kernel(sx)
v = dadd(T(1.0), dmul(sx, u))
ry = T(v)
qi = unsafe_trunc(Int, q)
qi & 1 != 0 && (u = ry; ry = rx; rx = u)
qi & 2 != 0 && (rx = -rx)
(qi + 1) & 2 != 0 && (ry = -ry)
abs(d) > TRIG_MAX(T) && (rx = ry = T(0.0))
isinf(d) && (rx = ry = T(NaN))
return (rx, ry)
end
"""
tan(x)
Compute the tangent of `x`, where the output is in radians.
"""
function tan end
"""
tan_fast(x)
Compute the tangent of `x`, where the output is in radians.
"""
function tan_fast end
@inline function tan_fast_kernel(x::Float64)
c16 = 9.99583485362149960784268e-06
c15 = -4.31184585467324750724175e-05
c14 = 0.000103573238391744000389851
c13 = -0.000137892809714281708733524
c12 = 0.000157624358465342784274554
c11 = -6.07500301486087879295969e-05
c10 = 0.000148898734751616411290179
c9 = 0.000219040550724571513561967
c8 = 0.000595799595197098359744547
c7 = 0.00145461240472358871965441
c6 = 0.0035923150771440177410343
c5 = 0.00886321546662684547901456
c4 = 0.0218694899718446938985394
c3 = 0.0539682539049961967903002
c2 = 0.133333333334818976423364
c1 = 0.333333333333320047664472
return @horner x c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15 c16
end
@inline function tan_fast_kernel(x::Float32)
c7 = 0.00446636462584137916564941f0
c6 = -8.3920182078145444393158f-05
c5 = 0.0109639242291450500488281f0
c4 = 0.0212360303848981857299805f0
c3 = 0.0540687143802642822265625f0
c2 = 0.133325666189193725585938f0
c1 = 0.33333361148834228515625f0
return @horner x c1 c2 c3 c4 c5 c6 c7
end
@inline function tan_fast(d::T) where {T<:Float64}
qh = trunc(d * (2 * T(M_1_PI)) / (1 << 24))
ql = round(d * (2 * T(M_1_PI)) - qh * (1 << 24))
x = muladd(qh, -PI_A(T) * T(0.5) * (1 << 24), d)
x = muladd(ql, -PI_A(T) * T(0.5), x)
x = muladd(qh, -PI_B(T) * T(0.5) * (1 << 24), x)
x = muladd(ql, -PI_B(T) * T(0.5), x)
x = muladd(qh, -PI_C(T) * T(0.5) * (1 << 24), x)
x = muladd(ql, -PI_C(T) * T(0.5), x)
x = muladd(qh * (1 << 24) + ql, -PI_D(T) * T(0.5), x)
s = x * x
qli = unsafe_trunc(Int, ql)
qli & 1 != 0 && (x = -x)
u = tan_fast_kernel(s)
u = muladd(s, u * x, x)
qli & 1 != 0 && (u = T(1.0) / u)
!isinf(d) && (isnegzero(d) || abs(d) > TRIG_MAX(T)) && (u = T(-0.0))
return u
end
@inline function tan_fast(d::T) where {T<:Float32}
q = round(d * (2 * T(M_1_PI)))
x = d
x = muladd(q, -PI_A(T) * T(0.5), x)
x = muladd(q, -PI_B(T) * T(0.5), x)
x = muladd(q, -PI_C(T) * T(0.5), x)
x = muladd(q, -PI_D(T) * T(0.5), x)
s = x * x
qi = unsafe_trunc(Int, q)
qi & 1 != 0 && (x = -x)
u = tan_fast_kernel(s)
u = muladd(s, u * x, x)
qi & 1 != 0 && (u = T(1.0) / u)
!isinf(d) && (isnegzero(d) || abs(d) > TRIG_MAX(T)) && (u = T(-0.0))
return u
end
@inline function tan_kernel(x::Double{Float64})
c15 = 1.01419718511083373224408e-05
c14 = -2.59519791585924697698614e-05
c13 = 5.23388081915899855325186e-05
c12 = -3.05033014433946488225616e-05
c11 = 7.14707504084242744267497e-05
c10 = 8.09674518280159187045078e-05
c9 = 0.000244884931879331847054404
c8 = 0.000588505168743587154904506
c7 = 0.00145612788922812427978848
c6 = 0.00359208743836906619142924
c5 = 0.00886323944362401618113356
c4 = 0.0218694882853846389592078
c3 = 0.0539682539781298417636002
c2 = 0.133333333333125941821962
c1 = 0.333333333333334980164153
return dadd(c1, x.hi * (@horner x.hi c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15))
end
@inline function tan_kernel(x::Double{Float32})
c7 = 0.00446636462584137916564941f0
c6 = -8.3920182078145444393158f-05
c5 = 0.0109639242291450500488281f0
c4 = 0.0212360303848981857299805f0
c3 = 0.0540687143802642822265625f0
c2 = 0.133325666189193725585938f0
c1 = 0.33333361148834228515625f0
return dadd(c1, x.hi * (@horner x.hi c2 c3 c4 c5 c6 c7))
end
@inline function tan(d::T) where {T<:Float64}
qh = trunc(d * (T(M_2_PI)) / (1 << 24))
s = dadd2(dmul(Double(T(M_2_PI_H), T(M_2_PI_L)), d), (d < 0 ? T(-0.5) : T(0.5)) - qh * (1 << 24))
ql = trunc(T(s))
s = dadd2(d, qh * (-PI_A(T) * T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_A(T) * T(0.5) ))
s = dadd2(s, qh * (-PI_B(T) * T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_B(T) * T(0.5) ))
s = dadd2(s, qh * (-PI_C(T) * T(0.5) * (1 << 24)))
s = dadd2(s, ql * (-PI_C(T) * T(0.5) ))
s = dadd2(s, (qh * (1 << 24) + ql) * (-PI_D(T) * T(0.5)))
qli = unsafe_trunc(Int, ql)
qli & 1 != 0 && (s = -s)
t = s
s = dsqu(s)
u = tan_kernel(s)
x = dadd(T(1.0), dmul(u, s))
x = dmul(t, x)
qli & 1 != 0 && (x = drec(x))
v = T(x)
!isinf(d) && (isnegzero(d) || abs(d) > TRIG_MAX(T)) && (v = T(-0.0))
return v
end
@inline function tan(d::T) where {T<:Float32}
q = round(d * (T(M_2_PI)))
s = dadd2(d, q * -PI_A(T) * T(0.5))
s = dadd2(s, q * -PI_B(T) * T(0.5))
s = dadd2(s, q * -PI_C(T) * T(0.5))
s = dadd2(s, q * -PI_XD(T) * T(0.5))
s = dadd2(s, q * -PI_XE(T) * T(0.5))
qi = unsafe_trunc(Int, q)
qi & 1 != 0 && (s = -s)
t = s
s = dsqu(s)
s = dnormalize(s)
u = tan_kernel(s)
x = dadd(T(1.0), dmul(u, s))
x = dmul(t, x)
qi & 1 != 0 && (x = drec(x))
v = T(x)
!isinf(d) && (isnegzero(d) || abs(d) > TRIG_MAX(T)) && (v = T(-0.0))
return v
end
"""
atan(x)
Compute the inverse tangent of `x`, where the output is in radians.
"""
@inline function atan(x::T) where {T<:Union{Float32,Float64}}
u = T(atan2k(Double(abs(x)), Double(T(1))))
isinf(x) && (u = T(PI_2))
flipsign(u, x)
end
@inline function atan_fast_kernel(x::Float64)
c19 = -1.88796008463073496563746e-05
c18 = 0.000209850076645816976906797
c17 = -0.00110611831486672482563471
c16 = 0.00370026744188713119232403
c15 = -0.00889896195887655491740809
c14 = 0.016599329773529201970117
c13 = -0.0254517624932312641616861
c12 = 0.0337852580001353069993897
c11 = -0.0407629191276836500001934
c10 = 0.0466667150077840625632675
c9 = -0.0523674852303482457616113
c8 = 0.0587666392926673580854313
c7 = -0.0666573579361080525984562
c6 = 0.0769219538311769618355029
c5 = -0.090908995008245008229153
c4 = 0.111111105648261418443745
c3 = -0.14285714266771329383765
c2 = 0.199999999996591265594148
c1 = -0.333333333333311110369124
return @horner x c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15 c16 c17 c18 c19
end
@inline function atan_fast_kernel(x::Float32)
c8 = 0.00282363896258175373077393f0
c7 = -0.0159569028764963150024414f0
c6 = 0.0425049886107444763183594f0
c5 = -0.0748900920152664184570312f0
c4 = 0.106347933411598205566406f0
c3 = -0.142027363181114196777344f0
c2 = 0.199926957488059997558594f0
c1 = -0.333331018686294555664062f0
return @horner x c1 c2 c3 c4 c5 c6 c7 c8
end
"""
atan_fast(x)
Compute the inverse tangent of `x`, where the output is in radians.
"""
@inline function atan_fast(x::T) where {T<:Union{Float32,Float64}}
q = 0
if signbit(x)
x = -x
q = 2
end
if x > 1
x = 1 / x
q |= 1
end
t = x * x
u = atan_fast_kernel(t)
t = x + x * t * u
q & 1 != 0 && (t = T(PI_2) - t)
q & 2 != 0 && (t = -t)
return t
end
const under_atan2(::Type{Float64}) = 5.5626846462680083984e-309
const under_atan2(::Type{Float32}) = 2.9387372783541830947f-39
"""
atan(x, y)
Compute the inverse tangent of `x/y`, using the signs of both `x` and `y` to determine the quadrant of the return value.
"""
@inline function atan(x::T, y::T) where {T<:Union{Float32,Float64}}
abs(y) < under_atan2(T) && (x *= T(Int64(1) << 53); y *= T(Int64(1) << 53))
r = T(atan2k(Double(abs(x)), Double(y)))
r = flipsign(r, y)
if isinf(y) || y == 0
r = T(PI_2) - (isinf(y) ? _sign(y) * T(PI_2) : T(0.0))
end
if isinf(x)
r = T(PI_2) - (isinf(y) ? _sign(y) * T(PI_4) : T(0.0))
end
if x == 0
r = _sign(y) == -1 ? T(M_PI) : T(0.0)
end
return isnan(y) || isnan(x) ? T(NaN) : flipsign(r, x)
end
"""
atan_fast(x, y)
Compute the inverse tangent of `x/y`, using the signs of both `x` and `y` to determine the quadrant of the return value.
"""
@inline function atan_fast(x::T, y::T) where {T<:Union{Float32,Float64}}
r = atan2k_fast(abs(x), y)
r = flipsign(r, y)
if isinf(y) || y == 0
r = T(PI_2) - (isinf(y) ? _sign(y) * T(PI_2) : T(0))
end
if isinf(x)
r = T(PI_2) - (isinf(y) ? _sign(y) * T(PI_4) : T(0))
end
if x == 0
r = _sign(y) == -1 ? T(M_PI) : T(0)
end
return isnan(y) || isnan(x) ? T(NaN) : flipsign(r, x)
end
"""
asin(x)
Compute the inverse sine of `x`, where the output is in radians.
"""
@inline function asin(x::T) where {T<:Union{Float32,Float64}}
d = atan2k(Double(abs(x)), dsqrt(dmul(dadd(T(1), x), dsub(T(1), x))))
u = T(d)
abs(x) == 1 && (u = T(PI_2))
flipsign(u, x)
end
"""
asin_fast(x)
Compute the inverse sine of `x`, where the output is in radians.
"""
@inline function asin_fast(x::T) where {T<:Union{Float32,Float64}}
flipsign(atan2k_fast(abs(x), _sqrt((1 + x) * (1 - x))), x)
end
"""
acos(x)
Compute the inverse cosine of `x`, where the output is in radians.
"""
@inline function acos(x::T) where {T<:Union{Float32,Float64}}
d = atan2k(dsqrt(dmul(dadd(T(1), x), dsub(T(1), x))), Double(abs(x)))
d = flipsign(d, x)
abs(x) == 1 && (d = Double(T(0)))
signbit(x) && (d = dadd(MDPI(T), d))
return T(d)
end
"""
acos_fast(x)
Compute the inverse cosine of `x`, where the output is in radians.
"""
@inline function acos_fast(x::T) where {T<:Union{Float32,Float64}}
flipsign(atan2k_fast(_sqrt((1 + x) * (1 - x)), abs(x)), x) + (signbit(x) ? T(M_PI) : T(0))
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 1449 | ## utility functions mainly used by the private math functions in priv.jl
function is_fma_fast end
for T in (Float32, Float64)
@eval is_fma_fast(::Type{$T}) = $(muladd(nextfloat(one(T)), nextfloat(one(T)), -nextfloat(one(T), 2)) != zero(T))
end
const FMA_FAST = is_fma_fast(Float64) && is_fma_fast(Float32)
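# The probe above detects a fused multiply-add: with FMA, muladd(1+ε, 1+ε, -(1+2ε)) evaluates
# exactly to ε² (nonzero), whereas the unfused multiply-then-add rounds the product to 1+2ε and
# yields zero. FMA_FAST is true only if both Float32 and Float64 fuse.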
@inline isnegzero(x::T) where {T<:AbstractFloat} = x === T(-0.0)
@inline ispinf(x::T) where {T<:AbstractFloat} = x == T(Inf)
@inline isninf(x::T) where {T<:AbstractFloat} = x == T(-Inf)
# _sign emits better native code than sign but does not properly handle the Inf/NaN cases
@inline _sign(d::T) where {T<:AbstractFloat} = flipsign(one(T), d)
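# e.g. _sign(T(NaN)) is ±one(T) (depending on the NaN's sign bit) whereas Base.sign(T(NaN)) is NaN.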
@inline integer2float(::Type{Float64}, m::Int) = reinterpret(Float64, (m % Int64) << significand_bits(Float64))
@inline integer2float(::Type{Float32}, m::Int) = reinterpret(Float32, (m % Int32) << significand_bits(Float32))
@inline float2integer(d::Float64) = (reinterpret(Int64, d) >> significand_bits(Float64)) % Int
@inline float2integer(d::Float32) = (reinterpret(Int32, d) >> significand_bits(Float32)) % Int
@inline pow2i(::Type{T}, q::Int) where {T<:Union{Float32,Float64}} = integer2float(T, q + exponent_bias(T))
# sqrt without the domain checks which we don't need since we handle the checks ourselves
if VERSION < v"0.7-"
_sqrt(x::T) where {T<:Union{Float32,Float64}} = Base.sqrt_llvm_fast(x)
else
_sqrt(x::T) where {T<:Union{Float32,Float64}} = Base.sqrt_llvm(x)
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 5317 |
MRANGE(::Type{Float64}) = 10000000
MRANGE(::Type{Float32}) = 10000
IntF(::Type{Float64}) = Int64
IntF(::Type{Float32}) = Int32
@testset "Accuracy (max error in ulp) for $T" for T in (Float32, Float64)
println("Accuracy tests for $T")
xx = map(T, vcat(-10:0.0002:10, -1000:0.1:1000))
fun_table = Dict(SLEEF.exp => Base.exp)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-10:0.0002:10, -1000:0.02:1000))
fun_table = Dict(SLEEF.asinh => Base.asinh, SLEEF.atanh => Base.atanh)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(1:0.0002:10, 1:0.02:1000))
fun_table = Dict(SLEEF.acosh => Base.acosh)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = T[]
for i = 1:10000
s = reinterpret(T, reinterpret(IntF(T), T(pi)/4 * i) - IntF(T)(20))
e = reinterpret(T, reinterpret(IntF(T), T(pi)/4 * i) + IntF(T)(20))
d = s
while d <= e
append!(xx, d)
d = reinterpret(T, reinterpret(IntF(T), d) + IntF(T)(1))
end
end
xx = append!(xx, -10:0.0002:10)
xx = append!(xx, -MRANGE(T):200.1:MRANGE(T))
fun_table = Dict(SLEEF.sin => Base.sin, SLEEF.cos => Base.cos, SLEEF.tan => Base.tan)
tol = 1
test_acc(T, fun_table, xx, tol)
fun_table = Dict(SLEEF.sin_fast => Base.sin, SLEEF.cos_fast => Base.cos, SLEEF.tan_fast => Base.tan)
tol = 4
test_acc(T, fun_table, xx, tol)
global sin_sincos_fast(x) = (SLEEF.sincos_fast(x))[1]
global cos_sincos_fast(x) = (SLEEF.sincos_fast(x))[2]
fun_table = Dict(sin_sincos_fast => Base.sin, cos_sincos_fast => Base.cos)
tol = 4
test_acc(T, fun_table, xx, tol)
global sin_sincos(x) = (SLEEF.sincos(x))[1]
global cos_sincos(x) = (SLEEF.sincos(x))[2]
fun_table = Dict(sin_sincos => Base.sin, cos_sincos => Base.cos)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-1:0.00002:1))
fun_table = Dict(SLEEF.asin_fast => Base.asin, SLEEF.acos_fast => Base.acos)
tol = 3
test_acc(T, fun_table, xx, tol)
fun_table = Dict(SLEEF.asin => asin, SLEEF.acos => Base.acos)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-10:0.0002:10, -10000:0.2:10000, -10000:0.201:10000))
fun_table = Dict(SLEEF.atan_fast => Base.atan)
tol = 3
test_acc(T, fun_table, xx, tol)
fun_table = Dict(SLEEF.atan => Base.atan)
tol = 1
test_acc(T, fun_table, xx, tol)
xx1 = map(Tuple{T,T}, [zip(-10:0.050:10, -10:0.050:10)...])
xx2 = map(Tuple{T,T}, [zip(-10:0.051:10, -10:0.052:10)...])
xx3 = map(Tuple{T,T}, [zip(-100:0.51:100, -100:0.51:100)...])
xx4 = map(Tuple{T,T}, [zip(-100:0.51:100, -100:0.52:100)...])
xx = vcat(xx1, xx2, xx3, xx4)
fun_table = Dict(SLEEF.atan_fast => Base.atan)
tol = 2.5
test_acc(T, fun_table, xx, tol)
fun_table = Dict(SLEEF.atan => Base.atan)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(0.0001:0.0001:10, 0.001:0.1:10000, 1.1.^(-1000:1000), 2.1.^(-1000:1000)))
fun_table = Dict(SLEEF.log_fast => Base.log)
tol = 3
test_acc(T, fun_table, xx, tol)
fun_table = Dict(SLEEF.log => Base.log)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(0.0001:0.0001:10, 0.0001:0.1:10000))
fun_table = Dict(SLEEF.log10 => Base.log10, SLEEF.log2 => Base.log2)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(0.0001:0.0001:10, 0.0001:0.1:10000, 10.0.^-(0:0.02:300), -10.0.^-(0:0.02:300)))
fun_table = Dict(SLEEF.log1p => Base.log1p)
tol = 1
test_acc(T, fun_table, xx, tol)
xx1 = map(Tuple{T,T}, [(x,y) for x = -100:0.20:100, y = 0.1:0.20:100])[:]
xx2 = map(Tuple{T,T}, [(x,y) for x = -100:0.21:100, y = 0.1:0.22:100])[:]
xx3 = map(Tuple{T,T}, [(x,y) for x = 2.1, y = -1000:0.1:1000])
xx = vcat(xx1, xx2, xx3)
fun_table = Dict(SLEEF.pow => Base.:^)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-10000:0.2:10000, 1.1.^(-1000:1000), 2.1.^(-1000:1000)))
fun_table = Dict(SLEEF.cbrt_fast => Base.cbrt)
tol = 2
test_acc(T, fun_table, xx, tol)
fun_table = Dict(SLEEF.cbrt => Base.cbrt)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-10:0.0002:10, -120:0.023:1000, -1000:0.02:2000))
fun_table = Dict(SLEEF.exp2 => Base.exp2)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-10:0.0002:10, -35:0.023:1000, -300:0.01:300))
fun_table = Dict(SLEEF.exp10 => Base.exp10)
tol = 1
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-10:0.0002:10, -1000:0.021:1000, -1000:0.023:1000,
10.0.^-(0:0.02:300), -10.0.^-(0:0.02:300), 10.0.^(0:0.021:300), -10.0.^-(0:0.021:300)))
fun_table = Dict(SLEEF.expm1 => Base.expm1)
tol = 2
test_acc(T, fun_table, xx, tol)
xx = map(T, vcat(-10:0.0002:10, -1000:0.02:1000))
fun_table = Dict(SLEEF.sinh => Base.sinh, SLEEF.cosh => Base.cosh, SLEEF.tanh => Base.tanh)
tol = 1
test_acc(T, fun_table, xx, tol)
@testset "xilogb at arbitrary values" begin
xd = Dict{T,Int}(T(1e-30) => -100, T(2.31e-11) => -36, T(-1.0) => 0, T(1.0) => 0,
T(2.31e11) => 37, T(1e30) => 99)
for (i,j) in xd
@test SLEEF.ilogb(i) === j
end
end
end
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 10161 | @testset "exceptional $T" for T in (Float32, Float64)
@testset "exceptional $xatan" for xatan in (SLEEF.atan_fast, SLEEF.atan)
@test xatan(T(0.0), T(-0.0)) === T(pi)
@test xatan(T(-0.0), T(-0.0)) === -T(pi)
@test ispzero(xatan(T(0.0), T(0.0)))
@test isnzero(xatan(T(-0.0), T(0.0)))
@test xatan( T(Inf), -T(Inf)) === T(3*pi/4)
@test xatan(-T(Inf), -T(Inf)) === T(-3*pi/4)
@test xatan( T(Inf), T(Inf)) === T(pi/4)
@test xatan(-T(Inf), T(Inf)) === T(-pi/4)
y = T(0.0)
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5]
for x in xa
@test xatan(y,x) === T(pi)
end
y = T(-0.0)
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5]
for x in xa
@test xatan(y,x) === T(-pi)
end
ya = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5]
xa = T[T(0.0), T(-0.0)]
for x in xa, y in ya
@test xatan(y,x) === T(-pi/2)
end
ya = T[100000.5, 100000, 3, 2.5, 2, 1.5, 1.0, 0.5]
xa = T[T(0.0), T(-0.0)]
for x in xa, y in ya
@test xatan(y,x) === T(pi/2)
end
y = T(Inf)
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5, -0.0, +0.0, 0.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
for x in xa
@test xatan(y,x) === T(pi/2)
end
y = T(-Inf)
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5, -0.0, +0.0, 0.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
for x in xa
@test xatan(y,x) === T(-pi/2)
end
ya = T[0.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
x = T(Inf)
for y in ya
@test ispzero(xatan(y,x))
end
ya = T[-0.5, -1.5, -2.0, -2.5, -3.0, -100000, -100000.5]
x = T(Inf)
for y in ya
@test isnzero(xatan(y,x))
end
ya = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5, -0.0, +0.0, 0.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5, NaN]
x = T(NaN)
for y in ya
@test isnan(xatan(y,x))
end
y = T(NaN)
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5, -0.0, +0.0, 0.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5, NaN]
for x in xa
@test isnan(xatan(y,x))
end
end # denormal/nonumber atan
@testset "exceptional xpow" begin
@test SLEEF.pow(T(1), T(NaN)) === T(1)
@test SLEEF.pow( T(NaN), T(0)) === T(1)
@test SLEEF.pow(-T(1), T(Inf)) === T(1)
@test SLEEF.pow(-T(1), T(-Inf)) === T(1)
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5]
ya = T[-100000.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 100000.5]
for x in xa, y in ya
@test isnan(SLEEF.pow(x,y))
end
x = T(NaN)
ya = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
for y in ya
@test isnan(SLEEF.pow(x,y))
end
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5, -0.0, +0.0, 0.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
y = T(NaN)
for x in xa
@test isnan(SLEEF.pow(x,y))
end
x = T(0.0)
ya = T[1, 3, 5, 7, 100001]
for y in ya
@test ispzero(SLEEF.pow(x,y))
end
x = T(-0.0)
ya = T[1, 3, 5, 7, 100001]
for y in ya
@test isnzero(SLEEF.pow(x,y))
end
xa = T[0.0, -0.0]
ya = T[0.5, 1.5, 2.0, 2.5, 4.0, 100000, 100000.5]
for x in xa, y in ya
@test ispzero(SLEEF.pow(x,y))
end
xa = T[-0.999, -0.5, -0.0, +0.0, +0.5, +0.999]
y = T(-Inf)
for x in xa
@test SLEEF.pow(x,y) === T(Inf)
end
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
y = T(-Inf)
for x in xa
@test ispzero(SLEEF.pow(x,y))
end
xa = T[-0.999, -0.5, -0.0, +0.0, +0.5, +0.999]
y = T(Inf)
for x in xa
@test ispzero(SLEEF.pow(x,y))
end
xa = T[-100000.5, -100000, -3, -2.5, -2, -1.5, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
y = T(Inf)
for x in xa
@test SLEEF.pow(x,y) === T(Inf)
end
x = T(-Inf)
ya = T[-100001, -5, -3, -1]
for y in ya
@test isnzero(SLEEF.pow(x,y))
end
x = T(-Inf)
ya = T[-100000.5, -100000, -4, -2.5, -2, -1.5, -0.5]
for y in ya
@test ispzero(SLEEF.pow(x,y))
end
x = T(-Inf)
ya = T[1, 3, 5, 7, 100001]
for y in ya
@test SLEEF.pow(x,y) === T(-Inf)
end
x = T(-Inf)
ya = T[0.5, 1.5, 2, 2.5, 3.5, 4, 100000, 100000.5]
for y in ya
@test SLEEF.pow(x,y) === T(Inf)
end
x = T(Inf)
ya = T[-100000.5, -100000, -3, -2.5, -2, -1.5, -1.0, -0.5]
for y in ya
@test ispzero(SLEEF.pow(x,y))
end
x = T(Inf)
ya = T[0.5, 1, 1.5, 2.0, 2.5, 3.0, 100000, 100000.5]
for y in ya
@test SLEEF.pow(x,y) === T(Inf)
end
x = T(0.0)
ya = T[-100001, -5, -3, -1]
for y in ya
@test SLEEF.pow(x,y) === T(Inf)
end
x = T(-0.0)
ya = T[-100001, -5, -3, -1]
for y in ya
@test SLEEF.pow(x,y) === T(-Inf)
end
xa = T[0.0, -0.0]
ya = T[-100000.5, -100000, -4, -2.5, -2, -1.5, -0.5]
for x in xa, y in ya
@test SLEEF.pow(x,y) === T(Inf)
end
xa = T[1000, -1000]
ya = T[1000, 1000.5, 1001]
for x in xa, y in ya
@test cmpdenorm(SLEEF.pow(x,y), Base.:^(BigFloat(x),BigFloat(y)))
end
end # denormal/nonumber pow
fun_table = Dict(SLEEF.sin_fast => Base.sin, SLEEF.sin => Base.sin)
@testset "exceptional $xtrig" for (xtrig, trig) in fun_table
xa = T[NaN, -0.0, 0.0, Inf, -Inf]
for x in xa
@test cmpdenorm(xtrig(x), trig(BigFloat(x)))
end
end
fun_table = Dict(SLEEF.cos_fast => Base.cos, SLEEF.cos => Base.cos)
@testset "exceptional $xtrig" for (xtrig, trig) in fun_table
xa = T[NaN, -0.0, 0.0, Inf, -Inf]
for x in xa
@test cmpdenorm(xtrig(x), trig(BigFloat(x)))
end
end
@testset "exceptional sin in $xsincos"for xsincos in (SLEEF.sincos_fast, SLEEF.sincos)
xa = T[NaN, -0.0, 0.0, Inf, -Inf]
for x in xa
q = xsincos(x)[1]
@test cmpdenorm(q, Base.sin(BigFloat(x)))
end
end
@testset "exceptional cos in $xsincos"for xsincos in (SLEEF.sincos_fast, SLEEF.sincos)
xa = T[NaN, -0.0, 0.0, Inf, -Inf]
for x in xa
q = xsincos(x)[2]
@test cmpdenorm(q, Base.cos(BigFloat(x)))
end
end
@testset "exceptional $xtan" for xtan in (SLEEF.tan_fast, SLEEF.tan)
xa = T[NaN, Inf, -Inf, -0.0, 0.0, pi/2, -pi/2]
for x in xa
@test cmpdenorm(xtan(x), Base.tan(BigFloat(x)))
end
end
fun_table = Dict(SLEEF.asin => Base.asin, SLEEF.asin_fast => Base.asin, SLEEF.acos => Base.acos, SLEEF.acos_fast => Base.acos)
@testset "exceptional $xatrig" for (xatrig, atrig) in fun_table
xa = T[NaN, Inf, -Inf, 2, -2, 1, -1, -0.0, 0.0]
for x in xa
@test cmpdenorm(xatrig(x), atrig(BigFloat(x)))
end
end
@testset "exceptional $xatan" for xatan in (SLEEF.atan, SLEEF.atan_fast)
xa = T[NaN, Inf, -Inf, -0.0, 0.0]
for x in xa
@test cmpdenorm(xatan(x), Base.atan(BigFloat(x)))
end
end
@testset "exceptional exp" begin
xa = T[NaN, Inf, -Inf, 10000, -10000]
for x in xa
@test cmpdenorm(SLEEF.exp(x), Base.exp(BigFloat(x)))
end
end
@testset "exceptional sinh" begin
xa = T[NaN, 0.0, -0.0, Inf, -Inf, 10000, -10000]
for x in xa
@test cmpdenorm(SLEEF.sinh(x), Base.sinh(BigFloat(x)))
end
end
@testset "exceptional cosh" begin
xa = T[NaN, 0.0, -0.0, Inf, -Inf, 10000, -10000]
for x in xa
@test cmpdenorm(SLEEF.cosh(x), Base.cosh(BigFloat(x)))
end
end
@testset "exceptional tanh" begin
xa = T[NaN, 0.0, -0.0, Inf, -Inf, 10000, -10000]
for x in xa
@test cmpdenorm(SLEEF.tanh(x), Base.tanh(BigFloat(x)))
end
end
@testset "exceptional asinh" begin
xa = T[NaN, 0.0, -0.0, Inf, -Inf, 10000, -10000]
for x in xa
@test cmpdenorm(SLEEF.asinh(x), Base.asinh(BigFloat(x)))
end
end
@testset "exceptional acosh" begin
xa = T[NaN, 0.0, -0.0, 1.0, Inf, -Inf, 10000, -10000]
for x in xa
@test cmpdenorm(SLEEF.acosh(x), Base.acosh(BigFloat(x)))
end
end
@testset "exceptional atanh" begin
xa = T[NaN, 0.0, -0.0, 1.0, -1.0, Inf, -Inf, 10000, -10000]
for x in xa
@test cmpdenorm(SLEEF.atanh(x), Base.atanh(BigFloat(x)))
end
end
@testset "exceptional $xcbrt" for xcbrt = (SLEEF.cbrt, SLEEF.cbrt_fast)
xa = T[NaN, Inf, -Inf, 0.0, -0.0]
for x in xa
@test cmpdenorm(xcbrt(x), Base.cbrt(BigFloat(x)))
end
end
@testset "exceptional exp2" begin
xa = T[NaN, Inf, -Inf]
for x in xa
@test cmpdenorm(SLEEF.exp2(x), Base.exp2(BigFloat(x)))
end
end
@testset "exceptional exp10" begin
xa = T[NaN, Inf, -Inf]
for x in xa
@test cmpdenorm(SLEEF.exp10(x), Base.exp10(BigFloat(x)))
end
end
@testset "exceptional expm1" begin
xa = T[NaN, Inf, -Inf, 0.0, -0.0]
for x in xa
@test cmpdenorm(SLEEF.expm1(x), Base.expm1(BigFloat(x)))
end
end
@testset "exceptional $xlog" for xlog in (SLEEF.log, SLEEF.log_fast)
xa = T[NaN, Inf, -Inf, 0, -1]
for x in xa
@test cmpdenorm(xlog(x), Base.log(BigFloat(x)))
end
end
@testset "exceptional log10" begin
xa = T[NaN, Inf, -Inf, 0, -1]
for x in xa
@test cmpdenorm(SLEEF.log10(x), Base.log10(BigFloat(x)))
end
end
@testset "exceptional log2" begin
xa = T[NaN, Inf, -Inf, 0, -1]
for x in xa
@test cmpdenorm(SLEEF.log2(x), Base.log2(BigFloat(x)))
end
end
@testset "exceptional log1p" begin
xa = T[NaN, Inf, -Inf, 0.0, -0.0, -1.0, -2.0]
for x in xa
@test cmpdenorm(SLEEF.log1p(x), Base.log1p(BigFloat(x)))
end
end
@testset "exceptional ldexp" begin
for i = -10000:10000
a = SLEEF.ldexp(T(1.0), i)
b = Base.ldexp(BigFloat(1.0), i)
@test (isfinite(b) && a == b || cmpdenorm(a,b))
end
end
@testset "exceptional ilogb" begin
@test SLEEF.ilogb(+T(Inf)) == SLEEF.INT_MAX
@test SLEEF.ilogb(-T(Inf)) == SLEEF.INT_MAX
@test SLEEF.ilogb(+T(0.0)) == SLEEF.FP_ILOGB0
@test SLEEF.ilogb(-T(0.0)) == SLEEF.FP_ILOGB0
@test SLEEF.ilogb( T(NaN)) == SLEEF.FP_ILOGBNAN
end
end #exceptional
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | code | 4306 | using SLEEFInline
using Test
using Printf
using Base.Math: significand_bits
isnzero(x::T) where {T <: AbstractFloat} = signbit(x)
ispzero(x::T) where {T <: AbstractFloat} = !signbit(x)
function cmpdenorm(x::Tx, y::Ty) where {Tx <: AbstractFloat, Ty <: AbstractFloat}
sizeof(Tx) < sizeof(Ty) ? y = Tx(y) : x = Ty(x) # cast larger type to smaller type
(isnan(x) && isnan(y)) && return true
(isnan(x) || isnan(y)) && return false
(isinf(x) != isinf(y)) && return false
(x == Tx(Inf) && y == Ty(Inf)) && return true
(x == Tx(-Inf) && y == Ty(-Inf)) && return true
if y == 0
(ispzero(x) && ispzero(y)) && return true
(isnzero(x) && isnzero(y)) && return true
return false
end
(!isnan(x) && !isnan(y) && !isinf(x) && !isinf(y)) && return sign(x) == sign(y)
return false
end
# the following compares the ulp between x and y.
# First it promotes them to the larger of the two types x,y
const infh(::Type{Float64}) = 1e300
const infh(::Type{Float32}) = 1e37
function countulp(T, x::AbstractFloat, y::AbstractFloat)
X, Y = promote(x, y)
x, y = T(X), T(Y) # Cast to smaller type
(isnan(x) && isnan(y)) && return 0
(isnan(x) || isnan(y)) && return 10000
if isinf(x)
(sign(x) == sign(y) && abs(y) > infh(T)) && return 0 # relaxed infinity handling
return 10001
end
(x == Inf && y == Inf) && return 0
(x == -Inf && y == -Inf) && return 0
if y == 0
x == 0 && return 0
return 10002
end
if isfinite(x) && isfinite(y)
return T(abs(X - Y) / ulp(y))
end
return 10003
end
const DENORMAL_MIN(::Type{Float64}) = 2.0^-1074
const DENORMAL_MIN(::Type{Float32}) = 2f0^-149
function ulp(x::T) where {T<:AbstractFloat}
x = abs(x)
x == T(0.0) && return DENORMAL_MIN(T)
val, e = frexp(x)
return max(ldexp(T(1.0), e - significand_bits(T) - 1), DENORMAL_MIN(T))
end
countulp(x::T, y::T) where {T <: AbstractFloat} = countulp(T, x, y)
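# Illustrative example: ulp(1.0) == 2.0^-52 == eps(Float64), so
# countulp(1.0, 1.0 + eps(Float64)) == 1.0, i.e. the two values differ by one ulp.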
# get rid of annoying warnings from the overwritten functions
macro nowarn(expr)
quote
_stderr = stderr
tmp = tempname()
stream = open(tmp, "w")
redirect_stderr(stream)
result = $(esc(expr))
redirect_stderr(_stderr)
close(stream)
result
end
end
# override the domain checking that Base adheres to
using Base.MPFR: ROUNDING_MODE
for f in (:sin, :cos, :tan, :asin, :acos, :atan, :asinh, :acosh, :atanh, :log, :log10, :log2, :log1p)
@eval begin
import Base.$f
@nowarn function ($f)(x::BigFloat)
z = BigFloat()
ccall($(string(:mpfr_, f), :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, Int32), z, x, ROUNDING_MODE[])
return z
end
end
end
strip_module_name(f::Function) = last(split(string(f), '.')) # strip module name from function f
# test the accuracy of a function where fun_table is a Dict mapping the function you want
# to test to a reference function
# xx is an array of values (which may be tuples for functions of multiple arguments)
# tol is the acceptable tolerance to test against
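# An illustrative call (mirroring the usage in accuracy.jl):
#   test_acc(Float64, Dict(SLEEF.exp => Base.exp), map(Float64, -10:0.0002:10), 1)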
function test_acc(T, fun_table, xx, tol; debug = false, tol_debug = 5)
@testset "accuracy $(strip_module_name(xfun))" for (xfun, fun) in fun_table
rmax = 0.0
rmean = 0.0
xmax = map(zero, first(xx))
for x in xx
q = xfun(x...)
c = fun(map(BigFloat, x)...)
u = countulp(T, q, c)
rmax = max(rmax, u)
xmax = rmax == u ? x : xmax
rmean += u
if debug && u > tol_debug
@printf("%s = %.20g\n%s = %.20g\nx = %.20g\nulp = %g\n", strip_module_name(xfun), q, strip_module_name(fun), T(c), x, ulp(T(c)))
end
end
rmean = rmean / length(xx)
t = @test trunc(rmax, digits=1) <= tol
fmtxloc = isa(xmax, Tuple) ? string('(', join((@sprintf("%.5f", x) for x in xmax), ", "), ')') : @sprintf("%.5f", xmax)
println(rpad(strip_module_name(xfun), 18, " "), ": max ", @sprintf("%f", rmax),
rpad(" at x = " * fmtxloc, 40, " "),
": mean ", @sprintf("%f", rmean))
end
end
function runtests()
@testset "SLEEF" begin
include("exceptional.jl")
include("accuracy.jl")
end
end
runtests()
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.0 | 755459c8c724437026ade8eed1032572a5236752 | docs | 2015 | <div align="center"> <img
src="https://rawgit.com/musm/SLEEF.jl/master/doc/src/assets/logo.svg"
alt="SLEEF Logo" width="380"></img> </div>
A pure Julia port of the [SLEEF math library](https://github.com/shibatch/SLEEF)
**History**
- Release [v0.4.0](https://github.com/musm/SLEEF.jl/releases/tag/v0.4.0) based on SLEEF v2.110
- Release [v0.3.0](https://github.com/musm/SLEEF.jl/releases/tag/v0.3.0) based on SLEEF v2.100
- Release [v0.2.0](https://github.com/musm/SLEEF.jl/releases/tag/v0.2.0) based on SLEEF v2.90
- Release [v0.1.0](https://github.com/musm/SLEEF.jl/releases/tag/v0.1.0) based on SLEEF v2.80
<br><br>
[](https://travis-ci.org/musm/SLEEF.jl)
[](https://ci.appveyor.com/project/musm/SLEEF-jl/branch/master)
[](https://coveralls.io/github/musm/SLEEF.jl?branch=master)
[](http://codecov.io/github/musm/SLEEF.jl?branch=master)
# Usage
To use `SLEEF.jl`
```julia
pkg> add SLEEF
julia> using SLEEF
julia> SLEEF.exp(3.0)
20.085536923187668
julia> SLEEF.exp(3f0)
20.085537f0
```
The available functions include (within 1 ulp)
```julia
sin, cos, tan, asin, acos, atan, sincos, sinh, cosh, tanh,
asinh, acosh, atanh, log, log2, log10, log1p, ilogb, exp, exp2, exp10, expm1, ldexp, cbrt, pow
```
Faster variants (within 3 ulp)
```julia
sin_fast, cos_fast, tan_fast, sincos_fast, asin_fast, acos_fast, atan_fast, atan2_fast, log_fast, cbrt_fast
```
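The faster variants are called in the same way; for example, their results agree with the accurate versions to within the stated ulp bounds:
```julia
julia> SLEEF.sin_fast(3.0) ≈ SLEEF.sin(3.0)
true
```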
## Notes
The trigonometric functions are tested to return values with specified
accuracy when the argument is within the following range:
- Double (Float64) precision trigonometric functions : `[-1e+14, 1e+14]`
- Single (Float32) precision trigonometric functions : `[-39000, 39000]`
| SLEEFInline | https://github.com/AStupidBear/SLEEFInline.jl.git |
|
[
"MIT"
] | 0.1.4 | 10b3eb40cd8603a0484af111385dca990e204437 | code | 5267 | ### A Pluto.jl notebook ###
# v0.19.22
using Markdown
using InteractiveUtils
# ╔═╡ cedad242-4983-11eb-2f32-3d405f151b77
begin
using DiffusionMap
using ManifoldLearning, CairoMakie, Statistics, JLD2, Random, StatsBase, LinearAlgebra, ColorSchemes, PlutoUI, CSV
end
# ╔═╡ 1996b541-e783-4b3a-9f7a-aca380d1047f
TableOfContents()
# ╔═╡ f306c9dc-cfd5-4fe0-b4cb-9a9d2da7281c
import AlgebraOfGraphics as aog
# ╔═╡ bad179ee-cef7-444b-b7cb-15053ac62959
begin
aog.set_aog_theme!(fonts=[aog.firasans("Light"), aog.firasans("Light")])
update_theme!(
fontsize=20,
linewidth=4,
markersize=14,
titlefont=aog.firasans("Light"),
resolution=(500, 380)
)
end
# ╔═╡ 61f9fccf-65af-45de-b232-f32506fdb75f
md"# viz Gaussian kernel"
# ╔═╡ 9b487b40-63c1-4a9b-b1f1-ef7703dd4dcc
kernel(xᵢ, xⱼ) = gaussian_kernel(xᵢ, xⱼ, 0.5)
# ╔═╡ 3c5dd85c-d21c-43f9-a0f6-fe3873a5268e
function viz_gaussian_kernel()
fig = Figure()
ax = Axis(fig[1, 1], xlabel="||xᵢ-xⱼ|| / ℓ²", ylabel="k(xᵢ, xⱼ)")
vlines!(0.0, color="lightgray", linewidth=1)
hlines!(0.0, color="lightgray", linewidth=1)
rs = range(0.0, 4.0, length=100)
lines!(rs, exp.(-rs.^2))
fig
end
# ╔═╡ fc635aaf-54cb-43e9-b338-8c0f9510b251
md"## helpers"
# ╔═╡ c64b57a4-ac19-461b-ba0f-4a3bfe583db2
function viz_data(X::Matrix{Float64}, name::String)
fig = Figure()
ax = Axis(fig[1, 1], xlabel="x₁", ylabel="x₂", aspect=DataAspect())
scatter!(X[1, :], X[2, :], color="black")
save("raw_data_$name.pdf", fig)
fig
end
# ╔═╡ 659956e9-aaa6-437f-85c3-c25e01457a28
function viz_graph(X::Matrix{Float64}, kernel::Function, name::String)
K = pairwise(kernel, eachcol(X), symmetric=true)
fig = Figure()
ax = Axis(fig[1, 1], xlabel="x₁", ylabel="x₂", aspect=DataAspect())
for i = 1:size(X)[2]
for j = (i+1):size(X)[2]
w = K[i, j]
lines!([X[1, i], X[1, j]], [X[2, i], X[2, j]],
linewidth=0.5,
color=(get(reverse(ColorSchemes.grays), w), w)
)
end
end
scatter!(X[1, :], X[2, :], color="black")
save("graph_rep_$name.pdf", fig)
fig
end
# ╔═╡ 310ba8ae-7443-44a5-b77d-168336af39bd
function color_points(X::Matrix{Float64}, x̂::Vector{Float64}, name::String)
cmap = ColorSchemes.terrain
fig = Figure()
ax = Axis(fig[1, 1],
xlabel="x₁",
ylabel="x₂",
title=name,
aspect=DataAspect()
)
sp = scatter!(X[1, :], X[2, :], color=x̂, colormap=cmap, strokewidth=1)
Colorbar(fig[1, 2], sp, label="latent dim")
save("dim_reduction_$name.pdf", fig)
fig
end
# ╔═╡ 8e01ddd4-27c9-4f5c-8d7f-abd4f2be4f82
md"# S-curve
### generate data.
"
# ╔═╡ c793e6b0-4983-11eb-29ae-3366c7d31e84
begin
nb_data = 125
_X, _ = ManifoldLearning.scurve(nb_data, 0.1)
x₁ = _X[3, :]
x₂ = _X[1, :]
X = collect(hcat(x₁, x₂)')
end
# ╔═╡ 3172a389-10c7-45e0-a556-a8c8f1465418
viz_data(X, "S_curve")
# ╔═╡ ecd00826-6e79-47b1-ab60-46ac3e1bfef9
md"### translate data to graph"
# ╔═╡ 48aa38c3-9876-4f7c-8e4f-e22dec5104fb
viz_gaussian_kernel()
# ╔═╡ a064c35e-0e4f-418e-a329-e747daae264e
viz_graph(X, kernel, "S_curve")
# ╔═╡ 17a20c5b-7da4-4982-9b7a-21ba3c3b8060
md"### diff map (success)"
# ╔═╡ 7fbe0950-27de-4969-9962-16080e6908ff
x̂ = diffusion_map(X, kernel, 1)[:]
# ╔═╡ 4311f687-824c-42d4-b0a0-b08945460e76
color_points(X, x̂, "diff map")
# ╔═╡ 1349c4d0-d6af-4154-b139-defdaec008a1
md"### PCA (fails)"
# ╔═╡ 1ec637b9-2279-4b54-b5a5-08445f229ab4
x̂_pca = pca(collect(X), 1)[:]
# ╔═╡ 2aaea099-6f33-4260-bf40-f82644e24eae
color_points(X, x̂_pca, "PCA")
# ╔═╡ bc368089-d8f8-43e6-8933-344f71ab66d7
md"# swiss roll"
# ╔═╡ 5f986a24-784b-4259-a5d7-4f6a86b5e6fd
_X_roll, _ = ManifoldLearning.swiss_roll(125, 0.3)
# ╔═╡ d8863ed7-8430-4135-a3a2-3efb8f54d1c0
X_roll = _X_roll[[1, 3], :] # make 2D
# ╔═╡ b682a6af-8b0b-41fc-914c-55f871dbdf4f
roll_kernel(xᵢ, xⱼ) = gaussian_kernel(xᵢ, xⱼ, 2.0)
# ╔═╡ 13ff129b-c936-451d-9908-80e840103450
x̂_roll = diffusion_map(X_roll, roll_kernel, 1)[:]
# ╔═╡ 62bf2efc-9614-4fbf-bb42-29e33f7d34a1
viz_graph(X_roll, roll_kernel, "roll")
# ╔═╡ 8fb74c20-eebb-40d5-a3aa-da192398bcf3
color_points(X_roll, x̂_roll, "diff map roll")
# ╔═╡ Cell order:
# ╠═cedad242-4983-11eb-2f32-3d405f151b77
# ╠═1996b541-e783-4b3a-9f7a-aca380d1047f
# ╠═f306c9dc-cfd5-4fe0-b4cb-9a9d2da7281c
# ╠═bad179ee-cef7-444b-b7cb-15053ac62959
# ╟─61f9fccf-65af-45de-b232-f32506fdb75f
# ╠═9b487b40-63c1-4a9b-b1f1-ef7703dd4dcc
# ╠═3c5dd85c-d21c-43f9-a0f6-fe3873a5268e
# ╟─fc635aaf-54cb-43e9-b338-8c0f9510b251
# ╠═c64b57a4-ac19-461b-ba0f-4a3bfe583db2
# ╠═659956e9-aaa6-437f-85c3-c25e01457a28
# ╠═310ba8ae-7443-44a5-b77d-168336af39bd
# ╟─8e01ddd4-27c9-4f5c-8d7f-abd4f2be4f82
# ╠═c793e6b0-4983-11eb-29ae-3366c7d31e84
# ╠═3172a389-10c7-45e0-a556-a8c8f1465418
# ╟─ecd00826-6e79-47b1-ab60-46ac3e1bfef9
# ╠═48aa38c3-9876-4f7c-8e4f-e22dec5104fb
# ╠═a064c35e-0e4f-418e-a329-e747daae264e
# ╟─17a20c5b-7da4-4982-9b7a-21ba3c3b8060
# ╠═7fbe0950-27de-4969-9962-16080e6908ff
# ╠═4311f687-824c-42d4-b0a0-b08945460e76
# ╟─1349c4d0-d6af-4154-b139-defdaec008a1
# ╠═1ec637b9-2279-4b54-b5a5-08445f229ab4
# ╠═2aaea099-6f33-4260-bf40-f82644e24eae
# ╟─bc368089-d8f8-43e6-8933-344f71ab66d7
# ╠═5f986a24-784b-4259-a5d7-4f6a86b5e6fd
# ╠═d8863ed7-8430-4135-a3a2-3efb8f54d1c0
# ╠═b682a6af-8b0b-41fc-914c-55f871dbdf4f
# ╠═13ff129b-c936-451d-9908-80e840103450
# ╠═62bf2efc-9614-4fbf-bb42-29e33f7d34a1
# ╠═8fb74c20-eebb-40d5-a3aa-da192398bcf3
| DiffusionMap | https://github.com/SimonEnsemble/DiffusionMap.jl.git |
|
[
"MIT"
] | 0.1.4 | 10b3eb40cd8603a0484af111385dca990e204437 | code | 3389 | module DiffusionMap
using LinearAlgebra, StatsBase
export normalize_to_stochastic_matrix!, diffusion_map, gaussian_kernel, pca
const BANNER = String(read(joinpath(dirname(pathof(DiffusionMap)), "banner.txt")))
banner() = println(BANNER)
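# Gaussian (radial basis function) kernel with length scale ℓ:
#   gaussian_kernel(xᵢ, xⱼ, ℓ) = exp(-‖xᵢ - xⱼ‖² / ℓ²)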
function gaussian_kernel(xⱼ, xᵢ, ℓ::Real)
r² = sum((xᵢ - xⱼ) .^ 2)
return exp(-r² / ℓ ^ 2)
end
"""
normalize_to_stochastic_matrix!(P; check_symmetry=true)
Normalize a kernel matrix `P` in place so that each row sums to one (making it right-stochastic).
By default, checks that `P` is symmetric first.
"""
function normalize_to_stochastic_matrix!(P::Matrix{<: Real}; check_symmetry::Bool=true)
if check_symmetry && !issymmetric(P)
error("kernel matrix not symmetric!")
end
# make sure rows sum to one
# this is equivalent to P = D⁻¹ * P
# with > d = vec(sum(P, dims=2)) (row sums; equal to the column sums since P is symmetric)
# > D⁻¹ = diagm(1 ./ d)
for i = 1:size(P)[1]
P[i, :] ./= sum(P[i, :])
end
end
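# Illustrative example (values chosen for clarity):
#   P = [1.0 3.0; 3.0 1.0]
#   normalize_to_stochastic_matrix!(P)  # P becomes [0.25 0.75; 0.75 0.25]; each row sums to one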
"""
diffusion_map(P, d; t=1)
diffusion_map(X, kernel, d; t=1)
compute diffusion map.
two call signatures:
* the data matrix `X` is passed in. examples are in the columns.
* the right-stochastic matrix `P` is passed in (e.g., for a precomputed kernel matrix).
# arguments
* `d`: embedding dimension (number of diffusion-map coordinates to return)
* `t`: number of diffusion steps (the eigenvalues are raised to the power `t`)
# example
```julia
# define kernel
kernel(xᵢ, xⱼ) = gaussian_kernel(xᵢ, xⱼ, 0.5)
# data matrix (100 data pts, 2D vectors)
X = rand(2, 100)
# diffusion map to 1D
X̂ = diffusion_map(X, kernel, 1)
```
"""
function diffusion_map(P::Matrix{<: Real}, d::Int; t::Int=1)::Matrix{Float64}
if size(P)[1] ≠ size(P)[2]
error("P is not square.")
end
if ! all(sum.(eachrow(P)) .≈ 1.0)
error("P is not right-stochastic. call `normalize_to_stochastic_matrix!` first.")
end
if ! all(P .>= 0.0)
error("P contains negative values.")
end
# eigen-decomposition of the stochastic matrix
eigen_decomp = eigen(P)
eigenvalues = eigen_decomp.values
@assert (maximum(abs.(eigenvalues)) - 1.0) < 0.0001 "largest eigenvalue should be 1.0"
# eigenvalues should all be real numbers, but numerical imprecision can promote
# the results to "complex" numbers with imaginary components of 0
for (i, ev) in enumerate(eigenvalues)
if isa(ev, Complex)
@assert isapprox(imag(ev), 0; atol=1e-6)
eigenvalues[i] = real(ev)
end
end
# sort eigenvalues, highest to lowest
# skip the first eigenvalue
idx = sortperm(Float64.(eigenvalues), rev=true)[2:end]
# get first d eigenvalues and vectors. scale eigenvectors.
λs = eigenvalues[idx][1:d]
Vs = eigen_decomp.vectors[:, idx][:, 1:d] * diagm(λs .^ t)
return Vs
end
function diffusion_map(X::Matrix{<: Real}, kernel::Function,
d::Int; t::Int=1, verbose::Bool=true)
if verbose
println("# features: ", size(X)[1])
println("# examples: ", size(X)[2])
end
# compute Laplacian matrix
P = pairwise(kernel, eachcol(X), symmetric=true)
normalize_to_stochastic_matrix!(P)
return diffusion_map(P, d; t=t)
end
function pca(X::Matrix{<: Real}, d::Int; verbose::Bool=true)
if verbose
println("# features: ", size(X)[1])
println("# examples: ", size(X)[2])
end
# center
X̂ = deepcopy(X)
for f = 1:size(X)[1]
X̂[f, :] = X[f, :] .- mean(X[f, :])
end
the_svd = svd(X̂)
# return the d leading principal component scores (right singular vectors scaled by the singular values)
return the_svd.V[:, 1:d] * diagm(the_svd.S[1:d])
end
end
| DiffusionMap | https://github.com/SimonEnsemble/DiffusionMap.jl.git |
|
[
"MIT"
] | 0.1.4 | 10b3eb40cd8603a0484af111385dca990e204437 | code | 418 | module Test_aqua
using DiffusionMap
import Aqua.test_all
# ambiguity testing finds many "problems" outside the scope of this package
ambiguities = false
# to skip when checking for stale dependencies and missing compat entries
# Aqua is added in a separate CI job, so (ironically) does not work w/ itself
stale_deps = (ignore=[:Aqua],)
test_all(DiffusionMap; ambiguities=ambiguities, stale_deps=stale_deps)
end
| DiffusionMap | https://github.com/SimonEnsemble/DiffusionMap.jl.git |
|
[
"MIT"
] | 0.1.4 | 10b3eb40cd8603a0484af111385dca990e204437 | code | 1857 | module Test_DiffusionMap
using DiffusionMap, IOCapture, LinearAlgebra, Test
DiffusionMap.banner()
@testset "normalize_to_stochastic_matrix!" begin
# generate random symmetric matrix
P = rand(20, 20)
P = P + P'
# normalize
normalize_to_stochastic_matrix!(P)
# test rows sum to 1
@test all(sum.(eachrow(P)) .≈ 1)
# make an assymmetric matrix
P = P + rand(20, 20)
# test asymmetry is detected
@test_throws ErrorException normalize_to_stochastic_matrix!(P)
end
@testset "diffusion_map" verbose=true begin
# function signature 1: diffusion_map(P, d; t=1)
@testset "matrix P" begin
# test input validation: non-square matrix P
@test_throws ErrorException diffusion_map(rand(20, 10), 2)
# test input validation: rows of P don't sum to 1
@test_throws ErrorException diffusion_map(rand(20, 20), 2)
# test input validation: negative values in P
@test_throws ErrorException diffusion_map(rand(20, 20) - rand(20, 20), 2)
# test a trivial case
D = zeros(10, 2)
D[2, 1] = D[3, 2] = 1
@test diffusion_map(diagm(ones(10)), 2) == D
end
# function signature 2: diffusion_map(X, kernel, d, t=1)
@testset "matrix X and kernel" begin
kernel(xᵢ, xⱼ) = gaussian_kernel(xᵢ, xⱼ, 0.5)
result = IOCapture.capture() do
return sum(diffusion_map(ones(20, 20), kernel, 2))
end
@test isapprox(result.value, 0; atol=1e-9)
end
end
@testset "pca" begin
# test a trivial case
result = IOCapture.capture() do
return pca(ones(10, 10), 2)
end
@test result.value == zeros(10, 2)
end
@testset "example.jl" begin
@info "Running example notebook (may take a minute or so)"
IOCapture.capture() do
include("../example/example.jl")
end
@test true
end
end
| DiffusionMap | https://github.com/SimonEnsemble/DiffusionMap.jl.git |
|
[
"MIT"
] | 0.1.4 | 10b3eb40cd8603a0484af111385dca990e204437 | docs | 537 | # diffusion maps

diffusion maps for non-linear dimensionality reduction / manifold learning.
### references
Coifman RR, Lafon S. Diffusion maps. _Applied and computational harmonic analysis_. 2006
J. de la Porte, B. M. Herbst, W. Hereman, S. J. van der Wal. An Introduction to Diffusion Maps. _Proceedings of the 19th symposium of the pattern recognition association of South Africa_. 2008
Tianlin Liu. A detailed derivation of the diffusion map. [link](http://tianlinliu.com/blog/2021/05/29/difussion-maps.html)
| DiffusionMap | https://github.com/SimonEnsemble/DiffusionMap.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 687 | using BenchmarkTools
# Benchmark suite modules
const SUITE_MODULES = Dict(
"gcp" => :BenchmarkGCP,
"mttkrp" => :BenchmarkMTTKRP,
"mttkrp-large" => :BenchmarkMTTKRPLarge,
"khatrirao" => :BenchmarkKhatriRao,
"leastsquares" => :BenchmarkLeastSquares,
)
# Create top-level suite including only sub-suites
# specified by ENV variable "GCP_BENCHMARK_SUITES"
const SUITE = BenchmarkGroup()
SELECTED_SUITES = split(get(ENV, "GCP_BENCHMARK_SUITES", join(keys(SUITE_MODULES), ' ')))
for suite_name in SELECTED_SUITES
module_name = SUITE_MODULES[suite_name]
include(joinpath(@__DIR__, "suites", "$(suite_name).jl"))
SUITE[suite_name] = eval(module_name).SUITE
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 2666 | # Script to run benchmarks and export a report
#
# To run this script from the package root directory:
# > julia benchmark/run.jl
#
# By default it will run all of the benchmark suites.
# To select a subset, pass them in as a command line arg:
# > julia benchmark/run.jl --suite mttkrp
#
# To compare against a previous commit, pass the commit git id as a command line arg:
# > julia benchmark/run.jl --compare eca7cb4
#
# This script produces/overwrites the following files:
# + `benchmark/results.json` : results from a benchmark run w/o comparison
# + `benchmark/results-target.json` : target results from a benchmark run w/ comparison
# + `benchmark/results-baseline.json` : baseline results from a benchmark run w/ comparison
# + `benchmark/report.md` : report summarizing the results
## Make sure the benchmark environment is activated and load utilities
@info("Loading benchmark environment")
import Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
include("utils.jl")
## Parse command line arguments
@info("Parsing arguments")
using ArgParse
settings = ArgParseSettings()
@add_arg_table settings begin
"--suite"
help = "which suite to run benchmarks for"
arg_type = String
"--compare"
help = "git id for previous commit to compare current version against"
arg_type = String
end
parsed_args = parse_args(settings)
## Run benchmarks
@info("Running benchmarks")
using GCPDecompositions, PkgBenchmark
### Create ENV for BenchmarkConfig
GCP_ENV =
isnothing(parsed_args["suite"]) ? Dict{String,Any}() :
Dict("GCP_BENCHMARK_SUITES" => parsed_args["suite"])
### Run benchmark and save
if isnothing(parsed_args["compare"])
results = benchmarkpkg(GCPDecompositions, BenchmarkConfig(; env = GCP_ENV))
writeresults(joinpath(@__DIR__, "results.json"), results)
else
results = judge(
GCPDecompositions,
BenchmarkConfig(; env = GCP_ENV),
BenchmarkConfig(; env = GCP_ENV, id = parsed_args["compare"]),
)
writeresults(
joinpath(@__DIR__, "results-target.json"),
PkgBenchmark.target_result(results),
)
writeresults(
joinpath(@__DIR__, "results-baseline.json"),
PkgBenchmark.baseline_result(results),
)
end
## Generate report and save
@info("Generating report")
### Build report
report = sprint(export_markdown, results)
report *= "\n\n" * GCPBenchmarkUtils.export_mttkrp_sweep(results)
### Tidy up report
report = GCPBenchmarkUtils.collapsible_details(report)
report = replace(
report,
r"^# Benchmark Report for \S*\n" => "# Benchmark Report for `GCPDecompositions`\n";
count = 1,
)
### Save report
write(joinpath(@__DIR__, "report.md"), report)
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 8418 | # Utility functions for benchmarking
module GCPBenchmarkUtils
using BenchmarkTools, PkgBenchmark
using Dictionaries, SplitApplyCombine, UnicodePlots
## Insert <details> tags to get collapsible sections on GitHub
"""
collapsible_details(markdown; header_level=2)
Modify `markdown` to collapse at header level `header_level`.
"""
function collapsible_details(markdown; header_level = 2)
# Extract all the lines and find the header lines
lines = split(markdown, '\n')
header_lines = map(findall(contains(r"^(?<tag>#+) "), lines)) do idx
line = lines[idx]
level = length(match(r"^(?<tag>#+) ", line)[:tag])
return idx => level
end
# Filter out subheaders (level above `header_level`)
header_lines = filter(header_lines) do (_, level)
return level <= header_level
end
# Loop through and insert "<details>" tags
tag_opened = false
for (idx, level) in header_lines
# Close prior opened tag
if tag_opened
lines[idx] = string("\n</details>\n", '\n', lines[idx])
tag_opened = false
end
# Insert/open new tag for headers at `header_level`
if level == header_level
lines[idx] = string(lines[idx], '\n', "\n<details>\n")
tag_opened = true
end
end
if tag_opened
lines[end] = string(lines[end], '\n', "\n</details>\n")
tag_opened = false
end
# Form full string
new_markdown = join(lines, '\n')
# Tidy up spacing with "<details>" tags
new_markdown = replace(
new_markdown,
r"<details>\n\n(?<extra>\n+)" => s"\g<extra><details>\n\n",
r"(?<extra>\n+)\n\n<\/details>" => s"\n\n</details>\g<extra>",
)
return new_markdown
end
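# Illustrative example: collapsible_details("# Report\n## A\ntext\n## B\nmore") wraps the content
# under the "## A" and "## B" headers in <details>...</details> blocks so they render collapsed on GitHub.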
## MTTKRP sweep
"""
export_mttkrp_sweep(results::BenchmarkResults)
export_mttkrp_sweep(results::BenchmarkJudgement)
export_mttkrp_sweep(results::Vector{Pair{String,BenchmarkResults}})
export_mttkrp_sweep(results::Vector{Pair{String,BenchmarkGroup}})
Generate a markdown report for the MTTKRP sweeps from `results`.
!!! note
Using a separate plot for each curve since it would be great
to be able to copy into GitHub comments (e.g., within PRs)
but GitHub doesn't currently handle ANSI color codes
(https://github.com/github/markup/issues/1538).
Tried working around by converting to HTML with ANSIColoredPrinters.jl
(https://github.com/JuliaDocs/ANSIColoredPrinters.jl)
which is the package that Documenter.jl uses to support ANSI color codes
(https://github.com/JuliaDocs/Documenter.jl/pull/1441/files).
However, GitHub also doesn't currently support coloring the text
(https://github.com/github/markup/issues/1440).
"""
export_mttkrp_sweep(results::BenchmarkResults) = export_mttkrp_sweep(["" => results])
export_mttkrp_sweep(results::BenchmarkJudgement) = export_mttkrp_sweep([
"Target" => PkgBenchmark.target_result(results),
"Baseline" => PkgBenchmark.baseline_result(results),
])
function export_mttkrp_sweep(results_list::Vector{Pair{String,BenchmarkResults}})
# Extract the mttkrp suites
results_list = map(results_list) do (name, results)
bg = PkgBenchmark.benchmarkgroup(results)
return haskey(bg, "mttkrp") ? name => bg["mttkrp"] : nothing
end
results_list = filter(!isnothing, results_list)
isempty(results_list) && return ""
# Call the main method
return export_mttkrp_sweep(results_list)
end
function export_mttkrp_sweep(results_list::Vector{Pair{String,BenchmarkGroup}})
isempty(results_list) && return ""
# Load the results into dictionaries
result_names = first.(results_list)
result_dicts = map(last.(results_list)) do results
return (sortkeys ∘ dictionary ∘ map)(results) do (key_str, result)
key_vals = match(
r"^size=\((?<size>[0-9, ]*)\), rank=(?<rank>[0-9]+), mode=(?<mode>[0-9]+)$",
key_str,
)
key = (;
size = Tuple(parse.(Int, split(key_vals[:size], ','))),
rank = parse(Int, key_vals[:rank]),
mode = parse(Int, key_vals[:mode]),
)
return key => result
end
end
all_keys = (sort ∘ unique ∘ mapmany)(keys, result_dicts)
# Runtime vs. size (for square tensors)
size_group_keys = group(
key -> (; ndims = length(key.size), rank = key.rank, mode = key.mode),
filter(key -> allequal(key.size), all_keys),
)
size_groups = collect(keys(size_group_keys))
size_plots = product(result_dicts, size_groups) do result_dict, size_group
sweep_keys = filter(in(keys(result_dict)), size_group_keys[size_group])
isempty(sweep_keys) && return nothing
return lineplot(
only.(unique.(getindex.(sweep_keys, :size))),
getproperty.(median.(getindices(result_dict, sweep_keys)), :time) ./ 1e6;
title = pretty_str(size_group),
xlabel = "Size",
ylabel = "Time (ms)",
canvas = DotCanvas,
width = 30,
height = 10,
margin = 0,
)
end
# Runtime vs. rank
rank_group_keys = group(key -> (; size = key.size, mode = key.mode), all_keys)
rank_groups = collect(keys(rank_group_keys))
rank_plots = product(result_dicts, rank_groups) do result_dict, rank_group
sweep_keys = filter(in(keys(result_dict)), rank_group_keys[rank_group])
isempty(sweep_keys) && return nothing
return lineplot(
getindex.(sweep_keys, :rank),
getproperty.(median.(getindices(result_dict, sweep_keys)), :time) ./ 1e6;
title = pretty_str(rank_group),
xlabel = "Rank",
ylabel = "Time (ms)",
canvas = DotCanvas,
width = 30,
height = 10,
margin = 0,
)
end
# Runtime vs. mode
mode_group_keys = group(key -> (; size = key.size, rank = key.rank), all_keys)
mode_groups = collect(keys(mode_group_keys))
mode_plots = product(result_dicts, mode_groups) do result_dict, mode_group
sweep_keys = filter(in(keys(result_dict)), mode_group_keys[mode_group])
isempty(sweep_keys) && return nothing
return boxplot(
string.("mode ", getindex.(sweep_keys, :mode)),
getproperty.(getindices(result_dict, sweep_keys), :times) ./ 1e6;
title = pretty_str(mode_group),
xlabel = "Time (ms)",
canvas = DotCanvas,
width = 30,
height = 10,
margin = 0,
)
end
return """
# MTTKRP benchmark plots
## Runtime vs. size (for square tensors)
Below are plots showing the runtime in milliseconds of MTTKRP as a function of the size of the square tensor, for varying ranks and modes:
$(plot_table(size_plots, pretty_str.(size_groups), result_names))
## Runtime vs. rank
Below are plots showing the runtime in milliseconds of MTTKRP as a function of the rank, for varying sizes and modes:
$(plot_table(rank_plots, pretty_str.(rank_groups), result_names))
## Runtime vs. mode
Below are plots showing the runtime in milliseconds of MTTKRP as a function of the mode, for varying sizes and ranks:
$(plot_table(mode_plots, pretty_str.(mode_groups), result_names))
"""
end
# Create pretty string for plot titles, etc.
pretty_str(group::NamedTuple) = string(group)[begin+1:end-1]
# Create HTML table of plots
function plot_table(plots, colnames, rownames)
# Create strings for each plot
plot_strs = map(plots) do plot
return isnothing(plot) ? "NO RESULTS" : """```
$(string(plot; color=false))
```"""
end
# Create matrix of cell strings
cell_strs = [
"<th></th>" permutedims(string.("<th>", colnames, "</th>"))
string.("<th>", rownames, "</th>") string.("<td>\n\n", plot_strs, "\n\n</td>")
]
# Create vector of row strings
row_strs = string.("<tr>\n", join.(eachrow(cell_strs), '\n'), "\n</tr>")
# Return full table (with blank rows "<tr></tr>" to work around GitHub styling)
return string(
"<table>\n",
row_strs[1],
'\n',
join(row_strs[2:end], "\n<tr></tr>\n"),
"\n</table>",
)
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1415 | module BenchmarkGCP
using BenchmarkTools, GCPDecompositions
using Random, Distributions
const SUITE = BenchmarkGroup()
# Benchmark least squares loss
for sz in [(15, 20, 25), (30, 40, 50)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
SUITE["least-squares-size(X)=$sz, rank(X)=$r"] =
@benchmarkable gcp($X, $r; loss = GCPLosses.LeastSquares())
end
# Benchmark Poisson loss
for sz in [(15, 20, 25), (30, 40, 50)], r in 1:2
Random.seed!(0)
M = CPD(fill(10.0, r), rand.(sz, r))
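    # Draw each tensor entry from a Poisson distribution whose mean is the low-rank model entry M[I]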
X = [rand(Poisson(M[I])) for I in CartesianIndices(size(M))]
SUITE["poisson-size(X)=$sz, rank(X)=$r"] =
@benchmarkable gcp($X, $r; loss = GCPLosses.Poisson())
end
# Benchmark Gamma loss
for sz in [(15, 20, 25), (30, 40, 50)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
k = 1.5
X = [rand(Gamma(k, M[I] / k)) for I in CartesianIndices(size(M))]
SUITE["gamma-size(X)=$sz, rank(X)=$r"] =
@benchmarkable gcp($X, $r; loss = GCPLosses.Gamma())
end
# Benchmark BernoulliOdds loss
for sz in [(15, 20, 25), (30, 40, 50)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [rand(Bernoulli(M[I] / (M[I] + 1))) for I in CartesianIndices(size(M))]
SUITE["bernoulliOdds-size(X)=$sz, rank(X)=$r"] =
@benchmarkable gcp($X, $r; loss = GCPLosses.BernoulliOdds())
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1517 | module BenchmarkKhatriRao
using BenchmarkTools, GCPDecompositions
using Random
const SUITE = BenchmarkGroup()
# Collect setups
const SETUPS = []
## N=1 matrix
append!(
SETUPS,
[
(; size = sz, rank = r) for sz in [ntuple(n -> In, 1) for In in 30:30:90],
r in [5; 30:30:90]
],
)
## N=2 matrices (balanced)
append!(
SETUPS,
[
(; size = sz, rank = r) for sz in [ntuple(n -> In, 2) for In in 30:30:90],
r in [5; 30:30:90]
],
)
## N=3 matrices (balanced)
append!(
SETUPS,
[
(; size = sz, rank = r) for sz in [ntuple(n -> In, 3) for In in 30:30:90],
r in [5; 30:30:90]
],
)
## N=3 matrices (imbalanced)
append!(
SETUPS,
[
(; size = sz, rank = r) for
sz in [Tuple(circshift([30, 100, 1000], c)) for c in 0:2], r in [5; 30:30:90]
],
)
## N=4 matrices (balanced)
append!(
SETUPS,
[
(; size = sz, rank = r) for sz in [ntuple(n -> In, 4) for In in 30:30:90],
r in [5; 30:30:90]
],
)
## N=4 matrices (imbalanced)
append!(
SETUPS,
[
(; size = sz, rank = r) for
sz in [Tuple(circshift([20, 40, 80, 500], c)) for c in 0:3], r in [5; 30:30:90]
],
)
# Generate random benchmarks
for SETUP in SETUPS
Random.seed!(0)
U = [randn(In, SETUP.rank) for In in SETUP.size]
SUITE["size=$(SETUP.size), rank=$(SETUP.rank)"] = @benchmarkable(
GCPDecompositions.TensorKernels.khatrirao($U...),
seconds = 2,
samples = 5,
)
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1143 | module BenchmarkLeastSquares
using BenchmarkTools, GCPDecompositions
using Random, Distributions
const SUITE = BenchmarkGroup()
# More thorough least-squares benchmarks than those in the general gcp benchmark suite
# Order-3 tensors
for sz in [(15, 20, 25), (30, 40, 50), (60, 70, 80)], r in [1, 10, 50]
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
SUITE["size(X)=$sz, rank(X)=$r"] =
@benchmarkable gcp($X, $r; loss = GCPLosses.LeastSquares())
end
# Order-4 tensors
for sz in [(15, 20, 25, 30), (30, 40, 50, 60)], r in [1, 10, 50]
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
SUITE["least-squares-size(X)=$sz, rank(X)=$r"] =
@benchmarkable gcp($X, $r; loss = GCPLosses.LeastSquares())
end
# Order-5 tensors
for sz in [(15, 20, 25, 30, 35), (30, 30, 30, 30, 30)], r in [1, 10, 50]
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
SUITE["least-squares-size(X)=$sz, rank(X)=$r"] =
@benchmarkable gcp($X, $r; loss = GCPLosses.LeastSquares())
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 921 | module BenchmarkMTTKRPLarge
using BenchmarkTools, GCPDecompositions
using Random
const SUITE = BenchmarkGroup()
# Collect setups
const SETUPS = []
## Balanced order-4 tensors
append!(
SETUPS,
[
(; size = sz, rank = r, mode = n) for sz in [ntuple(n -> In, 4) for In in 20:20:80],
r in [10; 20:20:120], n in 1:4
],
)
## Imbalanced order-4 tensors
append!(
SETUPS,
[
(; size = sz, rank = r, mode = n) for sz in [(20, 40, 80, 500), (500, 80, 40, 20)],
r in [10; 100:100:300], n in 1:4
],
)
# Generate random benchmarks
for SETUP in SETUPS
Random.seed!(0)
X = randn(SETUP.size)
U = Tuple([randn(In, SETUP.rank) for In in SETUP.size])
SUITE["size=$(SETUP.size), rank=$(SETUP.rank), mode=$(SETUP.mode)"] = @benchmarkable(
GCPDecompositions.TensorKernels.mttkrp($X, $U, $(SETUP.mode)),
seconds = 2,
samples = 5,
)
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1113 | module BenchmarkMTTKRP
using BenchmarkTools, GCPDecompositions
using Random
const SUITE = BenchmarkGroup()
# Collect setups
const SETUPS = []
## Balanced order-3 tensors
append!(
SETUPS,
[
(; size = sz, rank = r, mode = n) for
sz in [ntuple(n -> In, 3) for In in 50:50:200], r in [10; 50:50:300], n in 1:3
],
)
# ## Balanced order-4 tensors
# append!(
# SETUPS,
# [
# (; size = sz, rank = r, mode = n) for
# sz in [ntuple(n -> In, 4) for In in 30:30:120], r in 30:30:180, n in 1:4
# ],
# )
## Imbalanced tensors
append!(
SETUPS,
[
(; size = sz, rank = r, mode = n) for sz in [(30, 100, 1000), (1000, 100, 30)],
r in [10; 100:100:300], n in 1:3
],
)
# Generate random benchmarks
for SETUP in SETUPS
Random.seed!(0)
X = randn(SETUP.size)
U = Tuple([randn(In, SETUP.rank) for In in SETUP.size])
SUITE["size=$(SETUP.size), rank=$(SETUP.rank), mode=$(SETUP.mode)"] = @benchmarkable(
GCPDecompositions.TensorKernels.mttkrp($X, $U, $(SETUP.mode)),
seconds = 2,
samples = 5,
)
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1484 | using Documenter, GCPDecompositions
using PlutoStaticHTML
using InteractiveUtils
# Render demos
DEMO_DIR = joinpath(pkgdir(GCPDecompositions), "docs", "src", "demos")
DEMO_DICT = build_notebooks(
BuildOptions(DEMO_DIR; previous_dir = DEMO_DIR, output_format = documenter_output),
OutputOptions(; append_build_context = true),
)
DEMO_FILES = keys(DEMO_DICT)
# Make docs
makedocs(;
modules = [GCPDecompositions],
sitename = "GCPDecompositions.jl",
pages = [
"Home" => "index.md",
"Quick start guide" => "quickstart.md",
"Manual" => [
"Overview" => "man/main.md",
"Loss functions" => "man/losses.md",
"Constraints" => "man/constraints.md",
"Algorithms" => "man/algorithms.md",
],
"Demos" => [
"Overview" => "demos/main.md",
[
get(
PlutoStaticHTML.Pluto.frontmatter(joinpath(DEMO_DIR, FILE)),
"title",
FILE,
) => joinpath("demos", "$(splitext(FILE)[1]).md") for FILE in DEMO_FILES
]...,
],
"Developer Docs" =>
["Tensor Kernels" => "dev/kernels.md", "Private functions" => "dev/private.md"],
],
format = Documenter.HTML(;
canonical = "https://dahong67.github.io/GCPDecompositions.jl",
size_threshold = 2^21
),
)
# Deploy docs
deploydocs(; repo = "github.com/dahong67/GCPDecompositions.jl.git")
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 335 | using GCPDecompositions
using LiveServer
# Make list of demos to ignore
DEMO_DIR = joinpath(pkgdir(GCPDecompositions), "docs", "src", "demos")
DEMO_FILES = ["$(splitext(f)[1]).md" for f in readdir(DEMO_DIR) if splitext(f)[2] == ".jl"]
# Serve the docs
servedocs(; launch_browser = true, skip_files = joinpath.(DEMO_DIR, DEMO_FILES))
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 58279 | ### A Pluto.jl notebook ###
# v0.19.42
#> [frontmatter]
#> title = "Amino Acids"
using Markdown
using InteractiveUtils
# ╔═╡ c49eb38c-eb28-4bb0-ac21-c93ee8d70f03
using CairoMakie, GCPDecompositions, LinearAlgebra
# ╔═╡ ffe10069-dd12-4b32-98fb-d7a7fc2934ad
using CacheVariables, MAT, ZipFile
# ╔═╡ 7a5e7efa-2a1c-11ef-23d9-7f2f50eb9a05
let
# DEFINE METADATA
TITLE = "Amino Acids"
AUTHORS = [
"David Hong" => "https://dahong.gitlab.io",
]
CREATED = "14 June 2024"
FILENAME = replace(basename(@__FILE__), r"#==#.*" => "")
# CREATE MARKDOWN WITH BADGES
function badge_md((label, message); color="blue")
label_esc = replace(label, " " => "%20", "_" => "__", "-" => "--")
        message_esc = replace(message, " " => "%20", "_" => "__", "-" => "--")
        # Static badge image (shields.io-style URL; reconstructed here as an assumption from the escaping above)
        ""
    end
badge_md(lblmsg, url; color="gold") = "[$(badge_md(lblmsg; color))]($url)"
"""
# $TITLE
$(badge_md(
"Download Notebook" => FILENAME, joinpath("..", FILENAME);
color="blue"
))
$(join([badge_md("Author" => NAME, URL) for (NAME, URL) in AUTHORS], '\n'))
$(badge_md("Created" => CREATED; color="floralwhite"))
""" |> Markdown.parse
end
# ╔═╡ 4fbecc23-0632-4d2b-a6f9-e3bd5c283fc1
md"""
This demo considers a well-known dataset of fluorescence measurements
for three amino acids.
Website: [`https://ucphchemometrics.com/2023/05/04/amino-acids-fluorescence-data/`](https://ucphchemometrics.com/2023/05/04/amino-acids-fluorescence-data/)
Analogous tutorial in Tensor Toolbox: [`https://www.tensortoolbox.org/cp_als_doc.html`](https://www.tensortoolbox.org/cp_als_doc.html)
Relevant papers:
1. Bro, R, Multi-way Analysis in the Food Industry. Models, Algorithms, and Applications. 1998. Ph.D. Thesis, University of Amsterdam (NL) & Royal Veterinary and Agricultural University (DK).
2. Kiers, H.A.L. (1998) A three-step algorithm for Candecomp/Parafac analysis of large data sets with multicollinearity, Journal of Chemometrics, 12, 155-171.
"""
# ╔═╡ d679bef8-9908-4314-ad7b-d024b1a88785
md"""
## Load data
The following code downloads the data file, extracts the data, and caches it.
"""
# ╔═╡ 8ae49c7e-f300-4adf-9b52-801da6ef2af2
data = @cache "amino-acids-cache/data.bson" let
# Download file
url = "https://ucphchemometrics.com/wp-content/uploads/2023/05/claus.zip"
zipname = download(url, tempname(@__DIR__))
# Extract MAT file
zipfile = ZipFile.Reader(zipname)
matname = tempname(@__DIR__)
write(matname, only(zipfile.files))
close(zipfile)
# Extract data
data = matread(matname)
# Clean up and output data
rm(zipname)
rm(matname)
data
end
# ╔═╡ e7cc256d-6138-4426-96e6-c12abc8979f5
X = data["X"]
# ╔═╡ 87dbb90a-6d5b-4af1-969f-1d301b5e5aae
md"""
The data tensor `X` is $(join(size(X), '×'))
and consists of measurements across
$(size(X, 1)) samples,
$(size(X, 2)) emissions,
and
$(size(X, 3)) excitations.
"""
# ╔═╡ 322758fd-d469-427e-93ec-87f98d57ec82
md"""
The emission and excitation wavelengths are:
"""
# ╔═╡ b5b8ef77-3bf9-4e79-89f6-dccc84990bbd
em_wave = dropdims(data["EmAx"]; dims=1)
# ╔═╡ 09983bad-c192-4b3a-b120-cfccc4041ce1
ex_wave = dropdims(data["ExAx"]; dims=1)
# ╔═╡ fba839a3-4bbb-4ed6-9faf-6d56e304befb
md"""
Next, we plot the fluorescence landscape for each sample.
"""
# ╔═╡ 34ec42fc-0e4d-40ea-a898-da9bd70ee74d
with_theme() do
fig = Figure(; size=(800,500))
# Loop through samples
for i in 1:size(X,1)
ax = Axis3(fig[fldmod1(i, 3)...]; title="Sample $i",
xlabel="Emission\nWavelength", xticks=250:100:450,
ylabel="Excitation\nWavelength", yticks=240:30:300,
zlabel=""
)
surface!(ax, em_wave, ex_wave, X[i,:,:])
end
rowgap!(fig.layout, 40)
colgap!(fig.layout, 50)
resize_to_layout!(fig)
fig
end
# ╔═╡ e727aaa6-7f0b-4083-9837-7415027c29c7
md"""
## Run CP Decomposition
"""
# ╔═╡ 667fae24-27e5-4079-a1a3-c22375371dc1
md"""
Conventional CP decomposition (i.e., with respect to the least-squares loss)
can be computed using `gcp` with its default arguments.
"""
# ╔═╡ 06a685a5-c33e-4a68-8b2c-c976ea537b8c
M = gcp(X, 3)
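# Note: `gcp` defaults to the least-squares loss, so this is equivalent to passing the loss explicitly,
# e.g. `gcp(X, 3; loss = GCPLosses.LeastSquares())` as in the repository's benchmark suites
# (the exact keyword form may differ in the older GCPDecompositions version pinned by this notebook).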
# ╔═╡ 2d5e2926-e087-4b4e-abfb-7faabecf66eb
md"""
Now, we plot the (normalized) factors.
"""
# ╔═╡ cd0fc733-b2b9-44fd-b9c9-44f730ff1010
with_theme() do
fig = Figure()
# Plot factors (normalized by max)
for row in 1:ncomps(M)
barplot(fig[row,1], 1:size(X,1), normalize(M.U[1][:,row], Inf))
lines(fig[row,2], em_wave, normalize(M.U[2][:,row], Inf))
lines(fig[row,3], ex_wave, normalize(M.U[3][:,row], Inf))
end
# Link and hide x axes
linkxaxes!(contents(fig[:,1])...)
linkxaxes!(contents(fig[:,2])...)
linkxaxes!(contents(fig[:,3])...)
hidexdecorations!.(contents(fig[1:2,:]); ticks=false, grid=false)
# Link and hide y axes
linkyaxes!(contents(fig.layout)...)
hideydecorations!.(contents(fig.layout); ticks=false, grid=false)
# Add labels
Label(fig[0,1], "Samples"; tellwidth=false, fontsize=20)
Label(fig[0,2], "Emission"; tellwidth=false, fontsize=20)
Label(fig[0,3], "Excitation"; tellwidth=false, fontsize=20)
fig
end
# ╔═╡ 00000000-0000-0000-0000-000000000001
PLUTO_PROJECT_TOML_CONTENTS = """
[deps]
CacheVariables = "9a355d7c-ffe9-11e8-019f-21dae27d1722"
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
GCPDecompositions = "f59fb95b-1bc8-443b-b347-5e445a549f37"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MAT = "23992714-dd62-5051-b70f-ba57cb901cac"
ZipFile = "a5390f91-8eb1-5f08-bee0-b1d1ffed6cea"
[compat]
CacheVariables = "~0.1.4"
CairoMakie = "~0.12.2"
GCPDecompositions = "~0.1.2"
MAT = "~0.10.7"
ZipFile = "~0.10.1"
"""
# ╔═╡ 00000000-0000-0000-0000-000000000002
PLUTO_MANIFEST_TOML_CONTENTS = """
# This file is machine-generated - editing it directly is not advised
julia_version = "1.10.4"
manifest_format = "2.0"
project_hash = "2275c8cd3a87e8d9303cb2ef1677d874a37fbab9"
[[deps.AbstractFFTs]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "d92ad398961a3ed262d8bf04a1a2b8340f915fef"
uuid = "621f4979-c628-5d54-868e-fcf4e3e8185c"
version = "1.5.0"
weakdeps = ["ChainRulesCore", "Test"]
[deps.AbstractFFTs.extensions]
AbstractFFTsChainRulesCoreExt = "ChainRulesCore"
AbstractFFTsTestExt = "Test"
[[deps.AbstractTrees]]
git-tree-sha1 = "2d9c9a55f9c93e8887ad391fbae72f8ef55e1177"
uuid = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
version = "0.4.5"
[[deps.Adapt]]
deps = ["LinearAlgebra", "Requires"]
git-tree-sha1 = "6a55b747d1812e699320963ffde36f1ebdda4099"
uuid = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
version = "4.0.4"
weakdeps = ["StaticArrays"]
[deps.Adapt.extensions]
AdaptStaticArraysExt = "StaticArrays"
[[deps.AliasTables]]
deps = ["PtrArrays", "Random"]
git-tree-sha1 = "9876e1e164b144ca45e9e3198d0b689cadfed9ff"
uuid = "66dad0bd-aa9a-41b7-9441-69ab47430ed8"
version = "1.1.3"
[[deps.Animations]]
deps = ["Colors"]
git-tree-sha1 = "e81c509d2c8e49592413bfb0bb3b08150056c79d"
uuid = "27a7e980-b3e6-11e9-2bcd-0b925532e340"
version = "0.4.1"
[[deps.ArgTools]]
uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f"
version = "1.1.1"
[[deps.Artifacts]]
uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
[[deps.Automa]]
deps = ["PrecompileTools", "TranscodingStreams"]
git-tree-sha1 = "588e0d680ad1d7201d4c6a804dcb1cd9cba79fbb"
uuid = "67c07d97-cdcb-5c2c-af73-a7f9c32a568b"
version = "1.0.3"
[[deps.AxisAlgorithms]]
deps = ["LinearAlgebra", "Random", "SparseArrays", "WoodburyMatrices"]
git-tree-sha1 = "01b8ccb13d68535d73d2b0c23e39bd23155fb712"
uuid = "13072b0f-2c55-5437-9ae7-d433b7a33950"
version = "1.1.0"
[[deps.AxisArrays]]
deps = ["Dates", "IntervalSets", "IterTools", "RangeArrays"]
git-tree-sha1 = "16351be62963a67ac4083f748fdb3cca58bfd52f"
uuid = "39de3d68-74b9-583c-8d2d-e117c070f3a9"
version = "0.4.7"
[[deps.BSON]]
git-tree-sha1 = "4c3e506685c527ac6a54ccc0c8c76fd6f91b42fb"
uuid = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
version = "0.3.9"
[[deps.Base64]]
uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
[[deps.BufferedStreams]]
git-tree-sha1 = "4ae47f9a4b1dc19897d3743ff13685925c5202ec"
uuid = "e1450e63-4bb3-523b-b2a4-4ffa8c0fd77d"
version = "1.2.1"
[[deps.Bzip2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "9e2a6b69137e6969bab0152632dcb3bc108c8bdd"
uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
version = "1.0.8+1"
[[deps.CEnum]]
git-tree-sha1 = "389ad5c84de1ae7cf0e28e381131c98ea87d54fc"
uuid = "fa961155-64e5-5f13-b03f-caf6b980ea82"
version = "0.5.0"
[[deps.CRC32c]]
uuid = "8bf52ea8-c179-5cab-976a-9e18b702a9bc"
[[deps.CRlibm_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "e329286945d0cfc04456972ea732551869af1cfc"
uuid = "4e9b3aee-d8a1-5a3d-ad8b-7d824db253f0"
version = "1.0.1+0"
[[deps.CacheVariables]]
deps = ["BSON", "Logging"]
git-tree-sha1 = "0e74f35a57b1ebd6f622e47a18d92255cbd45b91"
uuid = "9a355d7c-ffe9-11e8-019f-21dae27d1722"
version = "0.1.4"
[[deps.Cairo]]
deps = ["Cairo_jll", "Colors", "Glib_jll", "Graphics", "Libdl", "Pango_jll"]
git-tree-sha1 = "d0b3f8b4ad16cb0a2988c6788646a5e6a17b6b1b"
uuid = "159f3aea-2a34-519c-b102-8c37f9878175"
version = "1.0.5"
[[deps.CairoMakie]]
deps = ["CRC32c", "Cairo", "Colors", "FileIO", "FreeType", "GeometryBasics", "LinearAlgebra", "Makie", "PrecompileTools"]
git-tree-sha1 = "9e8eaaff3e5951d8c61b7c9261d935eb27e0304b"
uuid = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
version = "0.12.2"
[[deps.Cairo_jll]]
deps = ["Artifacts", "Bzip2_jll", "CompilerSupportLibraries_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "JLLWrappers", "LZO_jll", "Libdl", "Pixman_jll", "Xorg_libXext_jll", "Xorg_libXrender_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "a2f1c8c668c8e3cb4cca4e57a8efdb09067bb3fd"
uuid = "83423d85-b0ee-5818-9007-b63ccbeb887a"
version = "1.18.0+2"
[[deps.Calculus]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "f641eb0a4f00c343bbc32346e1217b86f3ce9dad"
uuid = "49dc2e85-a5d0-5ad3-a950-438e2897f1b9"
version = "0.5.1"
[[deps.ChainRulesCore]]
deps = ["Compat", "LinearAlgebra"]
git-tree-sha1 = "71acdbf594aab5bbb2cec89b208c41b4c411e49f"
uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
version = "1.24.0"
weakdeps = ["SparseArrays"]
[deps.ChainRulesCore.extensions]
ChainRulesCoreSparseArraysExt = "SparseArrays"
[[deps.CodecZlib]]
deps = ["TranscodingStreams", "Zlib_jll"]
git-tree-sha1 = "59939d8a997469ee05c4b4944560a820f9ba0d73"
uuid = "944b1d66-785c-5afd-91f1-9de20f533193"
version = "0.7.4"
[[deps.ColorBrewer]]
deps = ["Colors", "JSON", "Test"]
git-tree-sha1 = "61c5334f33d91e570e1d0c3eb5465835242582c4"
uuid = "a2cac450-b92f-5266-8821-25eda20663c8"
version = "0.4.0"
[[deps.ColorSchemes]]
deps = ["ColorTypes", "ColorVectorSpace", "Colors", "FixedPointNumbers", "PrecompileTools", "Random"]
git-tree-sha1 = "4b270d6465eb21ae89b732182c20dc165f8bf9f2"
uuid = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
version = "3.25.0"
[[deps.ColorTypes]]
deps = ["FixedPointNumbers", "Random"]
git-tree-sha1 = "b10d0b65641d57b8b4d5e234446582de5047050d"
uuid = "3da002f7-5984-5a60-b8a6-cbb66c0b333f"
version = "0.11.5"
[[deps.ColorVectorSpace]]
deps = ["ColorTypes", "FixedPointNumbers", "LinearAlgebra", "Requires", "Statistics", "TensorCore"]
git-tree-sha1 = "a1f44953f2382ebb937d60dafbe2deea4bd23249"
uuid = "c3611d14-8923-5661-9e6a-0046d554d3a4"
version = "0.10.0"
weakdeps = ["SpecialFunctions"]
[deps.ColorVectorSpace.extensions]
SpecialFunctionsExt = "SpecialFunctions"
[[deps.Colors]]
deps = ["ColorTypes", "FixedPointNumbers", "Reexport"]
git-tree-sha1 = "362a287c3aa50601b0bc359053d5c2468f0e7ce0"
uuid = "5ae59095-9a9b-59fe-a467-6f913c188581"
version = "0.12.11"
[[deps.CommonSubexpressions]]
deps = ["MacroTools", "Test"]
git-tree-sha1 = "7b8a93dba8af7e3b42fecabf646260105ac373f7"
uuid = "bbf7d656-a473-5ed7-a52c-81e309532950"
version = "0.3.0"
[[deps.Compat]]
deps = ["TOML", "UUIDs"]
git-tree-sha1 = "b1c55339b7c6c350ee89f2c1604299660525b248"
uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
version = "4.15.0"
weakdeps = ["Dates", "LinearAlgebra"]
[deps.Compat.extensions]
CompatLinearAlgebraExt = "LinearAlgebra"
[[deps.CompilerSupportLibraries_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
version = "1.1.1+0"
[[deps.ConstructionBase]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "260fd2400ed2dab602a7c15cf10c1933c59930a2"
uuid = "187b0558-2788-49d3-abe0-74a17ed4e7c9"
version = "1.5.5"
weakdeps = ["IntervalSets", "StaticArrays"]
[deps.ConstructionBase.extensions]
ConstructionBaseIntervalSetsExt = "IntervalSets"
ConstructionBaseStaticArraysExt = "StaticArrays"
[[deps.Contour]]
git-tree-sha1 = "439e35b0b36e2e5881738abc8857bd92ad6ff9a8"
uuid = "d38c429a-6771-53c6-b99e-75d170b6e991"
version = "0.6.3"
[[deps.DataAPI]]
git-tree-sha1 = "abe83f3a2f1b857aac70ef8b269080af17764bbe"
uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
version = "1.16.0"
[[deps.DataStructures]]
deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
git-tree-sha1 = "1d0a14036acb104d9e89698bd408f63ab58cdc82"
uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
version = "0.18.20"
[[deps.DataValueInterfaces]]
git-tree-sha1 = "bfc1187b79289637fa0ef6d4436ebdfe6905cbd6"
uuid = "e2d170a0-9d28-54be-80f0-106bbe20a464"
version = "1.0.0"
[[deps.Dates]]
deps = ["Printf"]
uuid = "ade2ca70-3891-5945-98fb-dc099432e06a"
[[deps.DelaunayTriangulation]]
deps = ["EnumX", "ExactPredicates", "Random"]
git-tree-sha1 = "1755070db557ec2c37df2664c75600298b0c1cfc"
uuid = "927a84f5-c5f4-47a5-9785-b46e178433df"
version = "1.0.3"
[[deps.DiffResults]]
deps = ["StaticArraysCore"]
git-tree-sha1 = "782dd5f4561f5d267313f23853baaaa4c52ea621"
uuid = "163ba53b-c6d8-5494-b064-1a9d43ac40c5"
version = "1.1.0"
[[deps.DiffRules]]
deps = ["IrrationalConstants", "LogExpFunctions", "NaNMath", "Random", "SpecialFunctions"]
git-tree-sha1 = "23163d55f885173722d1e4cf0f6110cdbaf7e272"
uuid = "b552c78f-8df3-52c6-915a-8e097449b14b"
version = "1.15.1"
[[deps.Distributed]]
deps = ["Random", "Serialization", "Sockets"]
uuid = "8ba89e20-285c-5b6f-9357-94700520ee1b"
[[deps.Distributions]]
deps = ["AliasTables", "FillArrays", "LinearAlgebra", "PDMats", "Printf", "QuadGK", "Random", "SpecialFunctions", "Statistics", "StatsAPI", "StatsBase", "StatsFuns"]
git-tree-sha1 = "9c405847cc7ecda2dc921ccf18b47ca150d7317e"
uuid = "31c24e10-a181-5473-b8eb-7969acd0382f"
version = "0.25.109"
[deps.Distributions.extensions]
DistributionsChainRulesCoreExt = "ChainRulesCore"
DistributionsDensityInterfaceExt = "DensityInterface"
DistributionsTestExt = "Test"
[deps.Distributions.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
DensityInterface = "b429d917-457f-4dbc-8f4c-0cc954292b1d"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[deps.DocStringExtensions]]
deps = ["LibGit2"]
git-tree-sha1 = "2fb1e02f2b635d0845df5d7c167fec4dd739b00d"
uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
version = "0.9.3"
[[deps.Downloads]]
deps = ["ArgTools", "FileWatching", "LibCURL", "NetworkOptions"]
uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
version = "1.6.0"
[[deps.DualNumbers]]
deps = ["Calculus", "NaNMath", "SpecialFunctions"]
git-tree-sha1 = "5837a837389fccf076445fce071c8ddaea35a566"
uuid = "fa6b7ba4-c1ee-5f82-b5fc-ecf0adba8f74"
version = "0.6.8"
[[deps.EarCut_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "e3290f2d49e661fbd94046d7e3726ffcb2d41053"
uuid = "5ae413db-bbd1-5e63-b57d-d24a61df00f5"
version = "2.2.4+0"
[[deps.EnumX]]
git-tree-sha1 = "bdb1942cd4c45e3c678fd11569d5cccd80976237"
uuid = "4e289a0a-7415-4d19-859d-a7e5c4648b56"
version = "1.0.4"
[[deps.ExactPredicates]]
deps = ["IntervalArithmetic", "Random", "StaticArrays"]
git-tree-sha1 = "b3f2ff58735b5f024c392fde763f29b057e4b025"
uuid = "429591f6-91af-11e9-00e2-59fbe8cec110"
version = "2.2.8"
[[deps.Expat_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "1c6317308b9dc757616f0b5cb379db10494443a7"
uuid = "2e619515-83b5-522b-bb60-26c02a35a201"
version = "2.6.2+0"
[[deps.Extents]]
git-tree-sha1 = "2140cd04483da90b2da7f99b2add0750504fc39c"
uuid = "411431e0-e8b7-467b-b5e0-f676ba4f2910"
version = "0.1.2"
[[deps.FFMPEG_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "JLLWrappers", "LAME_jll", "Libdl", "Ogg_jll", "OpenSSL_jll", "Opus_jll", "PCRE2_jll", "Zlib_jll", "libaom_jll", "libass_jll", "libfdk_aac_jll", "libvorbis_jll", "x264_jll", "x265_jll"]
git-tree-sha1 = "ab3f7e1819dba9434a3a5126510c8fda3a4e7000"
uuid = "b22a6f82-2f65-5046-a5b2-351ab43fb4e5"
version = "6.1.1+0"
[[deps.FFTW]]
deps = ["AbstractFFTs", "FFTW_jll", "LinearAlgebra", "MKL_jll", "Preferences", "Reexport"]
git-tree-sha1 = "4820348781ae578893311153d69049a93d05f39d"
uuid = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341"
version = "1.8.0"
[[deps.FFTW_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c6033cc3892d0ef5bb9cd29b7f2f0331ea5184ea"
uuid = "f5851436-0d7a-5f13-b9de-f02708fd171a"
version = "3.3.10+0"
[[deps.FileIO]]
deps = ["Pkg", "Requires", "UUIDs"]
git-tree-sha1 = "82d8afa92ecf4b52d78d869f038ebfb881267322"
uuid = "5789e2e9-d7fb-5bc7-8068-2c6fae9b9549"
version = "1.16.3"
[[deps.FilePaths]]
deps = ["FilePathsBase", "MacroTools", "Reexport", "Requires"]
git-tree-sha1 = "919d9412dbf53a2e6fe74af62a73ceed0bce0629"
uuid = "8fc22ac5-c921-52a6-82fd-178b2807b824"
version = "0.8.3"
[[deps.FilePathsBase]]
deps = ["Compat", "Dates", "Mmap", "Printf", "Test", "UUIDs"]
git-tree-sha1 = "9f00e42f8d99fdde64d40c8ea5d14269a2e2c1aa"
uuid = "48062228-2e41-5def-b9a4-89aafe57970f"
version = "0.9.21"
[[deps.FileWatching]]
uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee"
[[deps.FillArrays]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "0653c0a2396a6da5bc4766c43041ef5fd3efbe57"
uuid = "1a297f60-69ca-5386-bcde-b61e274b549b"
version = "1.11.0"
weakdeps = ["PDMats", "SparseArrays", "Statistics"]
[deps.FillArrays.extensions]
FillArraysPDMatsExt = "PDMats"
FillArraysSparseArraysExt = "SparseArrays"
FillArraysStatisticsExt = "Statistics"
[[deps.FixedPointNumbers]]
deps = ["Statistics"]
git-tree-sha1 = "05882d6995ae5c12bb5f36dd2ed3f61c98cbb172"
uuid = "53c48c17-4a7d-5ca2-90c5-79b7896eea93"
version = "0.8.5"
[[deps.Fontconfig_jll]]
deps = ["Artifacts", "Bzip2_jll", "Expat_jll", "FreeType2_jll", "JLLWrappers", "Libdl", "Libuuid_jll", "Zlib_jll"]
git-tree-sha1 = "db16beca600632c95fc8aca29890d83788dd8b23"
uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
version = "2.13.96+0"
[[deps.Format]]
git-tree-sha1 = "9c68794ef81b08086aeb32eeaf33531668d5f5fc"
uuid = "1fa38f19-a742-5d3f-a2b9-30dd87b9d5f8"
version = "1.3.7"
[[deps.ForwardDiff]]
deps = ["CommonSubexpressions", "DiffResults", "DiffRules", "LinearAlgebra", "LogExpFunctions", "NaNMath", "Preferences", "Printf", "Random", "SpecialFunctions"]
git-tree-sha1 = "cf0fe81336da9fb90944683b8c41984b08793dad"
uuid = "f6369f11-7733-5829-9624-2563aa707210"
version = "0.10.36"
weakdeps = ["StaticArrays"]
[deps.ForwardDiff.extensions]
ForwardDiffStaticArraysExt = "StaticArrays"
[[deps.FreeType]]
deps = ["CEnum", "FreeType2_jll"]
git-tree-sha1 = "907369da0f8e80728ab49c1c7e09327bf0d6d999"
uuid = "b38be410-82b0-50bf-ab77-7b57e271db43"
version = "4.1.1"
[[deps.FreeType2_jll]]
deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Zlib_jll"]
git-tree-sha1 = "5c1d8ae0efc6c2e7b1fc502cbe25def8f661b7bc"
uuid = "d7e528f0-a631-5988-bf34-fe36492bcfd7"
version = "2.13.2+0"
[[deps.FreeTypeAbstraction]]
deps = ["ColorVectorSpace", "Colors", "FreeType", "GeometryBasics"]
git-tree-sha1 = "2493cdfd0740015955a8e46de4ef28f49460d8bc"
uuid = "663a7486-cb36-511b-a19d-713bb74d65c9"
version = "0.10.3"
[[deps.FriBidi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "1ed150b39aebcc805c26b93a8d0122c940f64ce2"
uuid = "559328eb-81f9-559d-9380-de523a88c83c"
version = "1.0.14+0"
[[deps.GCPDecompositions]]
deps = ["ForwardDiff", "LBFGSB", "LinearAlgebra"]
git-tree-sha1 = "994de61253546641cdcef8fe8bc9667468662299"
uuid = "f59fb95b-1bc8-443b-b347-5e445a549f37"
version = "0.1.2"
[deps.GCPDecompositions.extensions]
LossFunctionsExt = "LossFunctions"
[deps.GCPDecompositions.weakdeps]
LossFunctions = "30fc2ffe-d236-52d8-8643-a9d8f7c094a7"
[[deps.GeoInterface]]
deps = ["Extents"]
git-tree-sha1 = "801aef8228f7f04972e596b09d4dba481807c913"
uuid = "cf35fbd7-0cd7-5166-be24-54bfbe79505f"
version = "1.3.4"
[[deps.GeometryBasics]]
deps = ["EarCut_jll", "Extents", "GeoInterface", "IterTools", "LinearAlgebra", "StaticArrays", "StructArrays", "Tables"]
git-tree-sha1 = "b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134"
uuid = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
version = "0.4.11"
[[deps.Gettext_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "9b02998aba7bf074d14de89f9d37ca24a1a0b046"
uuid = "78b55507-aeef-58d4-861c-77aaff3498b1"
version = "0.21.0+0"
[[deps.Glib_jll]]
deps = ["Artifacts", "Gettext_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Libiconv_jll", "Libmount_jll", "PCRE2_jll", "Zlib_jll"]
git-tree-sha1 = "7c82e6a6cd34e9d935e9aa4051b66c6ff3af59ba"
uuid = "7746bdde-850d-59dc-9ae8-88ece973131d"
version = "2.80.2+0"
[[deps.Graphics]]
deps = ["Colors", "LinearAlgebra", "NaNMath"]
git-tree-sha1 = "d61890399bc535850c4bf08e4e0d3a7ad0f21cbd"
uuid = "a2bd30eb-e257-5431-a919-1863eab51364"
version = "1.1.2"
[[deps.Graphite2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "344bf40dcab1073aca04aa0df4fb092f920e4011"
uuid = "3b182d85-2403-5c21-9c21-1e1f0cc25472"
version = "1.3.14+0"
[[deps.GridLayoutBase]]
deps = ["GeometryBasics", "InteractiveUtils", "Observables"]
git-tree-sha1 = "fc713f007cff99ff9e50accba6373624ddd33588"
uuid = "3955a311-db13-416c-9275-1d80ed98e5e9"
version = "0.11.0"
[[deps.Grisu]]
git-tree-sha1 = "53bb909d1151e57e2484c3d1b53e19552b887fb2"
uuid = "42e2da0e-8278-4e71-bc24-59509adca0fe"
version = "1.0.2"
[[deps.HDF5]]
deps = ["Compat", "HDF5_jll", "Libdl", "MPIPreferences", "Mmap", "Preferences", "Printf", "Random", "Requires", "UUIDs"]
git-tree-sha1 = "e856eef26cf5bf2b0f95f8f4fc37553c72c8641c"
uuid = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
version = "0.17.2"
[deps.HDF5.extensions]
MPIExt = "MPI"
[deps.HDF5.weakdeps]
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
[[deps.HDF5_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LazyArtifacts", "LibCURL_jll", "Libdl", "MPICH_jll", "MPIPreferences", "MPItrampoline_jll", "MicrosoftMPI_jll", "OpenMPI_jll", "OpenSSL_jll", "TOML", "Zlib_jll", "libaec_jll"]
git-tree-sha1 = "82a471768b513dc39e471540fdadc84ff80ff997"
uuid = "0234f1f7-429e-5d53-9886-15a909be8d59"
version = "1.14.3+3"
[[deps.HarfBuzz_jll]]
deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"]
git-tree-sha1 = "129acf094d168394e80ee1dc4bc06ec835e510a3"
uuid = "2e76f6c2-a576-52d4-95c1-20adfe4de566"
version = "2.8.1+1"
[[deps.Hwloc_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "ca0f6bf568b4bfc807e7537f081c81e35ceca114"
uuid = "e33a78d0-f292-5ffc-b300-72abe9b543c8"
version = "2.10.0+0"
[[deps.HypergeometricFunctions]]
deps = ["DualNumbers", "LinearAlgebra", "OpenLibm_jll", "SpecialFunctions"]
git-tree-sha1 = "f218fe3736ddf977e0e772bc9a586b2383da2685"
uuid = "34004b35-14d8-5ef3-9330-4cdb6864b03a"
version = "0.3.23"
[[deps.ImageAxes]]
deps = ["AxisArrays", "ImageBase", "ImageCore", "Reexport", "SimpleTraits"]
git-tree-sha1 = "2e4520d67b0cef90865b3ef727594d2a58e0e1f8"
uuid = "2803e5a7-5153-5ecf-9a86-9b4c37f5f5ac"
version = "0.6.11"
[[deps.ImageBase]]
deps = ["ImageCore", "Reexport"]
git-tree-sha1 = "eb49b82c172811fd2c86759fa0553a2221feb909"
uuid = "c817782e-172a-44cc-b673-b171935fbb9e"
version = "0.1.7"
[[deps.ImageCore]]
deps = ["ColorVectorSpace", "Colors", "FixedPointNumbers", "MappedArrays", "MosaicViews", "OffsetArrays", "PaddedViews", "PrecompileTools", "Reexport"]
git-tree-sha1 = "b2a7eaa169c13f5bcae8131a83bc30eff8f71be0"
uuid = "a09fc81d-aa75-5fe9-8630-4744c3626534"
version = "0.10.2"
[[deps.ImageIO]]
deps = ["FileIO", "IndirectArrays", "JpegTurbo", "LazyModules", "Netpbm", "OpenEXR", "PNGFiles", "QOI", "Sixel", "TiffImages", "UUIDs"]
git-tree-sha1 = "437abb322a41d527c197fa800455f79d414f0a3c"
uuid = "82e4d734-157c-48bb-816b-45c225c6df19"
version = "0.6.8"
[[deps.ImageMetadata]]
deps = ["AxisArrays", "ImageAxes", "ImageBase", "ImageCore"]
git-tree-sha1 = "355e2b974f2e3212a75dfb60519de21361ad3cb7"
uuid = "bc367c6b-8a6b-528e-b4bd-a4b897500b49"
version = "0.9.9"
[[deps.Imath_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "0936ba688c6d201805a83da835b55c61a180db52"
uuid = "905a6f67-0a94-5f89-b386-d35d92009cd1"
version = "3.1.11+0"
[[deps.IndirectArrays]]
git-tree-sha1 = "012e604e1c7458645cb8b436f8fba789a51b257f"
uuid = "9b13fd28-a010-5f03-acff-a1bbcff69959"
version = "1.0.0"
[[deps.Inflate]]
git-tree-sha1 = "d1b1b796e47d94588b3757fe84fbf65a5ec4a80d"
uuid = "d25df0c9-e2be-5dd7-82c8-3ad0b3e990b9"
version = "0.1.5"
[[deps.IntelOpenMP_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "be50fe8df3acbffa0274a744f1a99d29c45a57f4"
uuid = "1d5cc7b8-4909-519e-a0f8-d0f5ad9712d0"
version = "2024.1.0+0"
[[deps.InteractiveUtils]]
deps = ["Markdown"]
uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
[[deps.Interpolations]]
deps = ["Adapt", "AxisAlgorithms", "ChainRulesCore", "LinearAlgebra", "OffsetArrays", "Random", "Ratios", "Requires", "SharedArrays", "SparseArrays", "StaticArrays", "WoodburyMatrices"]
git-tree-sha1 = "88a101217d7cb38a7b481ccd50d21876e1d1b0e0"
uuid = "a98d9a8b-a2ab-59e6-89dd-64a1c18fca59"
version = "0.15.1"
weakdeps = ["Unitful"]
[deps.Interpolations.extensions]
InterpolationsUnitfulExt = "Unitful"
[[deps.IntervalArithmetic]]
deps = ["CRlibm_jll", "MacroTools", "RoundingEmulator"]
git-tree-sha1 = "433b0bb201cd76cb087b017e49244f10394ebe9c"
uuid = "d1acc4aa-44c8-5952-acd4-ba5d80a2a253"
version = "0.22.14"
[deps.IntervalArithmetic.extensions]
IntervalArithmeticDiffRulesExt = "DiffRules"
IntervalArithmeticForwardDiffExt = "ForwardDiff"
IntervalArithmeticRecipesBaseExt = "RecipesBase"
[deps.IntervalArithmetic.weakdeps]
DiffRules = "b552c78f-8df3-52c6-915a-8e097449b14b"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
RecipesBase = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
[[deps.IntervalSets]]
git-tree-sha1 = "dba9ddf07f77f60450fe5d2e2beb9854d9a49bd0"
uuid = "8197267c-284f-5f27-9208-e0e47529a953"
version = "0.7.10"
[deps.IntervalSets.extensions]
IntervalSetsRandomExt = "Random"
IntervalSetsRecipesBaseExt = "RecipesBase"
IntervalSetsStatisticsExt = "Statistics"
[deps.IntervalSets.weakdeps]
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
RecipesBase = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
[[deps.IrrationalConstants]]
git-tree-sha1 = "630b497eafcc20001bba38a4651b327dcfc491d2"
uuid = "92d709cd-6900-40b7-9082-c6be49f344b6"
version = "0.2.2"
[[deps.Isoband]]
deps = ["isoband_jll"]
git-tree-sha1 = "f9b6d97355599074dc867318950adaa6f9946137"
uuid = "f1662d9f-8043-43de-a69a-05efc1cc6ff4"
version = "0.1.1"
[[deps.IterTools]]
git-tree-sha1 = "42d5f897009e7ff2cf88db414a389e5ed1bdd023"
uuid = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
version = "1.10.0"
[[deps.IteratorInterfaceExtensions]]
git-tree-sha1 = "a3f24677c21f5bbe9d2a714f95dcd58337fb2856"
uuid = "82899510-4779-5014-852e-03e436cf321d"
version = "1.0.0"
[[deps.JLLWrappers]]
deps = ["Artifacts", "Preferences"]
git-tree-sha1 = "7e5d6779a1e09a36db2a7b6cff50942a0a7d0fca"
uuid = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
version = "1.5.0"
[[deps.JSON]]
deps = ["Dates", "Mmap", "Parsers", "Unicode"]
git-tree-sha1 = "31e996f0a15c7b280ba9f76636b3ff9e2ae58c9a"
uuid = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
version = "0.21.4"
[[deps.JpegTurbo]]
deps = ["CEnum", "FileIO", "ImageCore", "JpegTurbo_jll", "TOML"]
git-tree-sha1 = "fa6d0bcff8583bac20f1ffa708c3913ca605c611"
uuid = "b835a17e-a41a-41e7-81f0-2f016b05efe0"
version = "0.1.5"
[[deps.JpegTurbo_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "c84a835e1a09b289ffcd2271bf2a337bbdda6637"
uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
version = "3.0.3+0"
[[deps.KernelDensity]]
deps = ["Distributions", "DocStringExtensions", "FFTW", "Interpolations", "StatsBase"]
git-tree-sha1 = "7d703202e65efa1369de1279c162b915e245eed1"
uuid = "5ab0869b-81aa-558d-bb23-cbf5423bbe9b"
version = "0.6.9"
[[deps.LAME_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "170b660facf5df5de098d866564877e119141cbd"
uuid = "c1c5ebd0-6772-5130-a774-d5fcae4a789d"
version = "3.100.2+0"
[[deps.LBFGSB]]
deps = ["L_BFGS_B_jll"]
git-tree-sha1 = "e2e6f53ee20605d0ea2be473480b7480bd5091b5"
uuid = "5be7bae1-8223-5378-bac3-9e7378a2f6e6"
version = "0.4.1"
[[deps.LLVMOpenMP_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "d986ce2d884d49126836ea94ed5bfb0f12679713"
uuid = "1d63c593-3942-5779-bab2-d838dc0a180e"
version = "15.0.7+0"
[[deps.LZO_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "70c5da094887fd2cae843b8db33920bac4b6f07d"
uuid = "dd4b983a-f0e5-5f8d-a1b7-129d4a5fb1ac"
version = "2.10.2+0"
[[deps.L_BFGS_B_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "77feda930ed3f04b2b0fbb5bea89e69d3677c6b0"
uuid = "81d17ec3-03a1-5e46-b53e-bddc35a13473"
version = "3.0.1+0"
[[deps.LaTeXStrings]]
git-tree-sha1 = "50901ebc375ed41dbf8058da26f9de442febbbec"
uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
version = "1.3.1"
[[deps.LazyArtifacts]]
deps = ["Artifacts", "Pkg"]
uuid = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
[[deps.LazyModules]]
git-tree-sha1 = "a560dd966b386ac9ae60bdd3a3d3a326062d3c3e"
uuid = "8cdb02fc-e678-4876-92c5-9defec4f444e"
version = "0.3.1"
[[deps.LibCURL]]
deps = ["LibCURL_jll", "MozillaCACerts_jll"]
uuid = "b27032c2-a3e7-50c8-80cd-2d36dbcbfd21"
version = "0.6.4"
[[deps.LibCURL_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll", "Zlib_jll", "nghttp2_jll"]
uuid = "deac9b47-8bc7-5906-a0fe-35ac56dc84c0"
version = "8.4.0+0"
[[deps.LibGit2]]
deps = ["Base64", "LibGit2_jll", "NetworkOptions", "Printf", "SHA"]
uuid = "76f85450-5226-5b5a-8eaa-529ad045b433"
[[deps.LibGit2_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll"]
uuid = "e37daf67-58a4-590a-8e99-b0245dd2ffc5"
version = "1.6.4+0"
[[deps.LibSSH2_jll]]
deps = ["Artifacts", "Libdl", "MbedTLS_jll"]
uuid = "29816b5a-b9ab-546f-933c-edad1886dfa8"
version = "1.11.0+1"
[[deps.Libdl]]
uuid = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
[[deps.Libffi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "0b4a5d71f3e5200a7dff793393e09dfc2d874290"
uuid = "e9f186c6-92d2-5b65-8a66-fee21dc1b490"
version = "3.2.2+1"
[[deps.Libgcrypt_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgpg_error_jll"]
git-tree-sha1 = "9fd170c4bbfd8b935fdc5f8b7aa33532c991a673"
uuid = "d4300ac3-e22c-5743-9152-c294e39db1e4"
version = "1.8.11+0"
[[deps.Libgpg_error_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "fbb1f2bef882392312feb1ede3615ddc1e9b99ed"
uuid = "7add5ba3-2f88-524e-9cd5-f83b8a55f7b8"
version = "1.49.0+0"
[[deps.Libiconv_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "f9557a255370125b405568f9767d6d195822a175"
uuid = "94ce4f54-9a6c-5748-9c1c-f9c7231a4531"
version = "1.17.0+0"
[[deps.Libmount_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "0c4f9c4f1a50d8f35048fa0532dabbadf702f81e"
uuid = "4b2f31a3-9ecc-558c-b454-b3730dcb73e9"
version = "2.40.1+0"
[[deps.Libuuid_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "5ee6203157c120d79034c748a2acba45b82b8807"
uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
version = "2.40.1+0"
[[deps.LinearAlgebra]]
deps = ["Libdl", "OpenBLAS_jll", "libblastrampoline_jll"]
uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
[[deps.LogExpFunctions]]
deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
git-tree-sha1 = "a2d09619db4e765091ee5c6ffe8872849de0feea"
uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
version = "0.3.28"
[deps.LogExpFunctions.extensions]
LogExpFunctionsChainRulesCoreExt = "ChainRulesCore"
LogExpFunctionsChangesOfVariablesExt = "ChangesOfVariables"
LogExpFunctionsInverseFunctionsExt = "InverseFunctions"
[deps.LogExpFunctions.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
ChangesOfVariables = "9e997f8a-9a97-42d5-a9f1-ce6bfc15e2c0"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.Logging]]
uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"
[[deps.MAT]]
deps = ["BufferedStreams", "CodecZlib", "HDF5", "SparseArrays"]
git-tree-sha1 = "1d2dd9b186742b0f317f2530ddcbf00eebb18e96"
uuid = "23992714-dd62-5051-b70f-ba57cb901cac"
version = "0.10.7"
[[deps.MKL_jll]]
deps = ["Artifacts", "IntelOpenMP_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "oneTBB_jll"]
git-tree-sha1 = "80b2833b56d466b3858d565adcd16a4a05f2089b"
uuid = "856f044c-d86e-5d09-b602-aeab76dc8ba7"
version = "2024.1.0+0"
[[deps.MPICH_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Hwloc_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "4099bb6809ac109bfc17d521dad33763bcf026b7"
uuid = "7cb0a576-ebde-5e09-9194-50597f1243b4"
version = "4.2.1+1"
[[deps.MPIPreferences]]
deps = ["Libdl", "Preferences"]
git-tree-sha1 = "c105fe467859e7f6e9a852cb15cb4301126fac07"
uuid = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
version = "0.1.11"
[[deps.MPItrampoline_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "8c35d5420193841b2f367e658540e8d9e0601ed0"
uuid = "f1f71cc9-e9ae-5b93-9b94-4fe0e1ad3748"
version = "5.4.0+0"
[[deps.MacroTools]]
deps = ["Markdown", "Random"]
git-tree-sha1 = "2fa9ee3e63fd3a4f7a9a4f4744a52f4856de82df"
uuid = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
version = "0.5.13"
[[deps.Makie]]
deps = ["Animations", "Base64", "CRC32c", "ColorBrewer", "ColorSchemes", "ColorTypes", "Colors", "Contour", "Dates", "DelaunayTriangulation", "Distributions", "DocStringExtensions", "Downloads", "FFMPEG_jll", "FileIO", "FilePaths", "FixedPointNumbers", "Format", "FreeType", "FreeTypeAbstraction", "GeometryBasics", "GridLayoutBase", "ImageIO", "InteractiveUtils", "IntervalSets", "Isoband", "KernelDensity", "LaTeXStrings", "LinearAlgebra", "MacroTools", "MakieCore", "Markdown", "MathTeXEngine", "Observables", "OffsetArrays", "Packing", "PlotUtils", "PolygonOps", "PrecompileTools", "Printf", "REPL", "Random", "RelocatableFolders", "Scratch", "ShaderAbstractions", "Showoff", "SignedDistanceFields", "SparseArrays", "Statistics", "StatsBase", "StatsFuns", "StructArrays", "TriplotBase", "UnicodeFun", "Unitful"]
git-tree-sha1 = "ec3a60c9de787bc6ef119d13e07d4bfacceebb83"
uuid = "ee78f7c6-11fb-53f2-987a-cfe4a2b5a57a"
version = "0.21.2"
[[deps.MakieCore]]
deps = ["ColorTypes", "GeometryBasics", "IntervalSets", "Observables"]
git-tree-sha1 = "c1c9da1a69f6c635a60581c98da252958c844d70"
uuid = "20f20a25-4f0e-4fdf-b5d1-57303727442b"
version = "0.8.2"
[[deps.MappedArrays]]
git-tree-sha1 = "2dab0221fe2b0f2cb6754eaa743cc266339f527e"
uuid = "dbb5928d-eab1-5f90-85c2-b9b0edb7c900"
version = "0.4.2"
[[deps.Markdown]]
deps = ["Base64"]
uuid = "d6f4376e-aef5-505a-96c1-9c027394607a"
[[deps.MathTeXEngine]]
deps = ["AbstractTrees", "Automa", "DataStructures", "FreeTypeAbstraction", "GeometryBasics", "LaTeXStrings", "REPL", "RelocatableFolders", "UnicodeFun"]
git-tree-sha1 = "1865d0b8a2d91477c8b16b49152a32764c7b1f5f"
uuid = "0a4f8689-d25c-4efe-a92b-7142dfc1aa53"
version = "0.6.0"
[[deps.MbedTLS_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1"
version = "2.28.2+1"
[[deps.MicrosoftMPI_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "f12a29c4400ba812841c6ace3f4efbb6dbb3ba01"
uuid = "9237b28f-5490-5468-be7b-bb81f5f5e6cf"
version = "10.1.4+2"
[[deps.Missings]]
deps = ["DataAPI"]
git-tree-sha1 = "ec4f7fbeab05d7747bdf98eb74d130a2a2ed298d"
uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28"
version = "1.2.0"
[[deps.Mmap]]
uuid = "a63ad114-7e13-5084-954f-fe012c677804"
[[deps.MosaicViews]]
deps = ["MappedArrays", "OffsetArrays", "PaddedViews", "StackViews"]
git-tree-sha1 = "7b86a5d4d70a9f5cdf2dacb3cbe6d251d1a61dbe"
uuid = "e94cdb99-869f-56ef-bcf0-1ae2bcbe0389"
version = "0.3.4"
[[deps.MozillaCACerts_jll]]
uuid = "14a3606d-f60d-562e-9121-12d972cd8159"
version = "2023.1.10"
[[deps.NaNMath]]
deps = ["OpenLibm_jll"]
git-tree-sha1 = "0877504529a3e5c3343c6f8b4c0381e57e4387e4"
uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3"
version = "1.0.2"
[[deps.Netpbm]]
deps = ["FileIO", "ImageCore", "ImageMetadata"]
git-tree-sha1 = "d92b107dbb887293622df7697a2223f9f8176fcd"
uuid = "f09324ee-3d7c-5217-9330-fc30815ba969"
version = "1.1.1"
[[deps.NetworkOptions]]
uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908"
version = "1.2.0"
[[deps.Observables]]
git-tree-sha1 = "7438a59546cf62428fc9d1bc94729146d37a7225"
uuid = "510215fc-4207-5dde-b226-833fc4488ee2"
version = "0.5.5"
[[deps.OffsetArrays]]
git-tree-sha1 = "e64b4f5ea6b7389f6f046d13d4896a8f9c1ba71e"
uuid = "6fe1bfb0-de20-5000-8ca7-80f57d26f881"
version = "1.14.0"
weakdeps = ["Adapt"]
[deps.OffsetArrays.extensions]
OffsetArraysAdaptExt = "Adapt"
[[deps.Ogg_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "887579a3eb005446d514ab7aeac5d1d027658b8f"
uuid = "e7412a2a-1a6e-54c0-be00-318e2571c051"
version = "1.3.5+1"
[[deps.OpenBLAS_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Libdl"]
uuid = "4536629a-c528-5b80-bd46-f80d51c5b363"
version = "0.3.23+4"
[[deps.OpenEXR]]
deps = ["Colors", "FileIO", "OpenEXR_jll"]
git-tree-sha1 = "327f53360fdb54df7ecd01e96ef1983536d1e633"
uuid = "52e1d378-f018-4a11-a4be-720524705ac7"
version = "0.3.2"
[[deps.OpenEXR_jll]]
deps = ["Artifacts", "Imath_jll", "JLLWrappers", "Libdl", "Zlib_jll"]
git-tree-sha1 = "8292dd5c8a38257111ada2174000a33745b06d4e"
uuid = "18a262bb-aa17-5467-a713-aee519bc75cb"
version = "3.2.4+0"
[[deps.OpenLibm_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "05823500-19ac-5b8b-9628-191a04bc5112"
version = "0.8.1+2"
[[deps.OpenMPI_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "e25c1778a98e34219a00455d6e4384e017ea9762"
uuid = "fe0851c0-eecd-5654-98d4-656369965a5c"
version = "4.1.6+0"
[[deps.OpenSSL_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "a028ee3cb5641cccc4c24e90c36b0a4f7707bdf5"
uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95"
version = "3.0.14+0"
[[deps.OpenSpecFun_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "13652491f6856acfd2db29360e1bbcd4565d04f1"
uuid = "efe28fd5-8261-553b-a9e1-b2916fc3738e"
version = "0.5.5+0"
[[deps.Opus_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "51a08fb14ec28da2ec7a927c4337e4332c2a4720"
uuid = "91d4177d-7536-5919-b921-800302f37372"
version = "1.3.2+0"
[[deps.OrderedCollections]]
git-tree-sha1 = "dfdf5519f235516220579f949664f1bf44e741c5"
uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
version = "1.6.3"
[[deps.PCRE2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "efcefdf7-47ab-520b-bdef-62a2eaa19f15"
version = "10.42.0+1"
[[deps.PDMats]]
deps = ["LinearAlgebra", "SparseArrays", "SuiteSparse"]
git-tree-sha1 = "949347156c25054de2db3b166c52ac4728cbad65"
uuid = "90014a1f-27ba-587c-ab20-58faa44d9150"
version = "0.11.31"
[[deps.PNGFiles]]
deps = ["Base64", "CEnum", "ImageCore", "IndirectArrays", "OffsetArrays", "libpng_jll"]
git-tree-sha1 = "67186a2bc9a90f9f85ff3cc8277868961fb57cbd"
uuid = "f57f5aa1-a3ce-4bc8-8ab9-96f992907883"
version = "0.4.3"
[[deps.Packing]]
deps = ["GeometryBasics"]
git-tree-sha1 = "ec3edfe723df33528e085e632414499f26650501"
uuid = "19eb6ba3-879d-56ad-ad62-d5c202156566"
version = "0.5.0"
[[deps.PaddedViews]]
deps = ["OffsetArrays"]
git-tree-sha1 = "0fac6313486baae819364c52b4f483450a9d793f"
uuid = "5432bcbf-9aad-5242-b902-cca2824c8663"
version = "0.5.12"
[[deps.Pango_jll]]
deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "FriBidi_jll", "Glib_jll", "HarfBuzz_jll", "JLLWrappers", "Libdl"]
git-tree-sha1 = "cb5a2ab6763464ae0f19c86c56c63d4a2b0f5bda"
uuid = "36c8627f-9965-5494-a995-c6b170f724f3"
version = "1.52.2+0"
[[deps.Parsers]]
deps = ["Dates", "PrecompileTools", "UUIDs"]
git-tree-sha1 = "8489905bcdbcfac64d1daa51ca07c0d8f0283821"
uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0"
version = "2.8.1"
[[deps.Pixman_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LLVMOpenMP_jll", "Libdl"]
git-tree-sha1 = "35621f10a7531bc8fa58f74610b1bfb70a3cfc6b"
uuid = "30392449-352a-5448-841d-b1acce4e97dc"
version = "0.43.4+0"
[[deps.Pkg]]
deps = ["Artifacts", "Dates", "Downloads", "FileWatching", "LibGit2", "Libdl", "Logging", "Markdown", "Printf", "REPL", "Random", "SHA", "Serialization", "TOML", "Tar", "UUIDs", "p7zip_jll"]
uuid = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
version = "1.10.0"
[[deps.PkgVersion]]
deps = ["Pkg"]
git-tree-sha1 = "f9501cc0430a26bc3d156ae1b5b0c1b47af4d6da"
uuid = "eebad327-c553-4316-9ea0-9fa01ccd7688"
version = "0.3.3"
[[deps.PlotUtils]]
deps = ["ColorSchemes", "Colors", "Dates", "PrecompileTools", "Printf", "Random", "Reexport", "Statistics"]
git-tree-sha1 = "7b1a9df27f072ac4c9c7cbe5efb198489258d1f5"
uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
version = "1.4.1"
[[deps.PolygonOps]]
git-tree-sha1 = "77b3d3605fc1cd0b42d95eba87dfcd2bf67d5ff6"
uuid = "647866c9-e3ac-4575-94e7-e3d426903924"
version = "0.1.2"
[[deps.PrecompileTools]]
deps = ["Preferences"]
git-tree-sha1 = "5aa36f7049a63a1528fe8f7c3f2113413ffd4e1f"
uuid = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
version = "1.2.1"
[[deps.Preferences]]
deps = ["TOML"]
git-tree-sha1 = "9306f6085165d270f7e3db02af26a400d580f5c6"
uuid = "21216c6a-2e73-6563-6e65-726566657250"
version = "1.4.3"
[[deps.Printf]]
deps = ["Unicode"]
uuid = "de0858da-6303-5e67-8744-51eddeeeb8d7"
[[deps.ProgressMeter]]
deps = ["Distributed", "Printf"]
git-tree-sha1 = "763a8ceb07833dd51bb9e3bbca372de32c0605ad"
uuid = "92933f4c-e287-5a05-a399-4b506db050ca"
version = "1.10.0"
[[deps.PtrArrays]]
git-tree-sha1 = "f011fbb92c4d401059b2212c05c0601b70f8b759"
uuid = "43287f4e-b6f4-7ad1-bb20-aadabca52c3d"
version = "1.2.0"
[[deps.QOI]]
deps = ["ColorTypes", "FileIO", "FixedPointNumbers"]
git-tree-sha1 = "18e8f4d1426e965c7b532ddd260599e1510d26ce"
uuid = "4b34888f-f399-49d4-9bb3-47ed5cae4e65"
version = "1.0.0"
[[deps.QuadGK]]
deps = ["DataStructures", "LinearAlgebra"]
git-tree-sha1 = "9b23c31e76e333e6fb4c1595ae6afa74966a729e"
uuid = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
version = "2.9.4"
[[deps.REPL]]
deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"]
uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
[[deps.Random]]
deps = ["SHA"]
uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
[[deps.RangeArrays]]
git-tree-sha1 = "b9039e93773ddcfc828f12aadf7115b4b4d225f5"
uuid = "b3c3ace0-ae52-54e7-9d0b-2c1406fd6b9d"
version = "0.3.2"
[[deps.Ratios]]
deps = ["Requires"]
git-tree-sha1 = "1342a47bf3260ee108163042310d26f2be5ec90b"
uuid = "c84ed2f1-dad5-54f0-aa8e-dbefe2724439"
version = "0.4.5"
weakdeps = ["FixedPointNumbers"]
[deps.Ratios.extensions]
RatiosFixedPointNumbersExt = "FixedPointNumbers"
[[deps.Reexport]]
git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b"
uuid = "189a3867-3050-52da-a836-e630ba90ab69"
version = "1.2.2"
[[deps.RelocatableFolders]]
deps = ["SHA", "Scratch"]
git-tree-sha1 = "ffdaf70d81cf6ff22c2b6e733c900c3321cab864"
uuid = "05181044-ff0b-4ac5-8273-598c1e38db00"
version = "1.0.1"
[[deps.Requires]]
deps = ["UUIDs"]
git-tree-sha1 = "838a3a4188e2ded87a4f9f184b4b0d78a1e91cb7"
uuid = "ae029012-a4dd-5104-9daa-d747884805df"
version = "1.3.0"
[[deps.Rmath]]
deps = ["Random", "Rmath_jll"]
git-tree-sha1 = "f65dcb5fa46aee0cf9ed6274ccbd597adc49aa7b"
uuid = "79098fc4-a85e-5d69-aa6a-4863f24498fa"
version = "0.7.1"
[[deps.Rmath_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "d483cd324ce5cf5d61b77930f0bbd6cb61927d21"
uuid = "f50d1b31-88e8-58de-be2c-1cc44531875f"
version = "0.4.2+0"
[[deps.RoundingEmulator]]
git-tree-sha1 = "40b9edad2e5287e05bd413a38f61a8ff55b9557b"
uuid = "5eaf0fd0-dfba-4ccb-bf02-d820a40db705"
version = "0.2.1"
[[deps.SHA]]
uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce"
version = "0.7.0"
[[deps.SIMD]]
deps = ["PrecompileTools"]
git-tree-sha1 = "2803cab51702db743f3fda07dd1745aadfbf43bd"
uuid = "fdea26ae-647d-5447-a871-4b548cad5224"
version = "3.5.0"
[[deps.Scratch]]
deps = ["Dates"]
git-tree-sha1 = "3bac05bc7e74a75fd9cba4295cde4045d9fe2386"
uuid = "6c6a2e73-6563-6170-7368-637461726353"
version = "1.2.1"
[[deps.Serialization]]
uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
[[deps.ShaderAbstractions]]
deps = ["ColorTypes", "FixedPointNumbers", "GeometryBasics", "LinearAlgebra", "Observables", "StaticArrays", "StructArrays", "Tables"]
git-tree-sha1 = "79123bc60c5507f035e6d1d9e563bb2971954ec8"
uuid = "65257c39-d410-5151-9873-9b3e5be5013e"
version = "0.4.1"
[[deps.SharedArrays]]
deps = ["Distributed", "Mmap", "Random", "Serialization"]
uuid = "1a1011a3-84de-559e-8e89-a11a2f7dc383"
[[deps.Showoff]]
deps = ["Dates", "Grisu"]
git-tree-sha1 = "91eddf657aca81df9ae6ceb20b959ae5653ad1de"
uuid = "992d4aef-0814-514b-bc4d-f2e9a6c4116f"
version = "1.0.3"
[[deps.SignedDistanceFields]]
deps = ["Random", "Statistics", "Test"]
git-tree-sha1 = "d263a08ec505853a5ff1c1ebde2070419e3f28e9"
uuid = "73760f76-fbc4-59ce-8f25-708e95d2df96"
version = "0.4.0"
[[deps.SimpleTraits]]
deps = ["InteractiveUtils", "MacroTools"]
git-tree-sha1 = "5d7e3f4e11935503d3ecaf7186eac40602e7d231"
uuid = "699a6c99-e7fa-54fc-8d76-47d257e15c1d"
version = "0.9.4"
[[deps.Sixel]]
deps = ["Dates", "FileIO", "ImageCore", "IndirectArrays", "OffsetArrays", "REPL", "libsixel_jll"]
git-tree-sha1 = "2da10356e31327c7096832eb9cd86307a50b1eb6"
uuid = "45858cf5-a6b0-47a3-bbea-62219f50df47"
version = "0.1.3"
[[deps.Sockets]]
uuid = "6462fe0b-24de-5631-8697-dd941f90decc"
[[deps.SortingAlgorithms]]
deps = ["DataStructures"]
git-tree-sha1 = "66e0a8e672a0bdfca2c3f5937efb8538b9ddc085"
uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c"
version = "1.2.1"
[[deps.SparseArrays]]
deps = ["Libdl", "LinearAlgebra", "Random", "Serialization", "SuiteSparse_jll"]
uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
version = "1.10.0"
[[deps.SpecialFunctions]]
deps = ["IrrationalConstants", "LogExpFunctions", "OpenLibm_jll", "OpenSpecFun_jll"]
git-tree-sha1 = "2f5d4697f21388cbe1ff299430dd169ef97d7e14"
uuid = "276daf66-3868-5448-9aa4-cd146d93841b"
version = "2.4.0"
weakdeps = ["ChainRulesCore"]
[deps.SpecialFunctions.extensions]
SpecialFunctionsChainRulesCoreExt = "ChainRulesCore"
[[deps.StackViews]]
deps = ["OffsetArrays"]
git-tree-sha1 = "46e589465204cd0c08b4bd97385e4fa79a0c770c"
uuid = "cae243ae-269e-4f55-b966-ac2d0dc13c15"
version = "0.1.1"
[[deps.StaticArrays]]
deps = ["LinearAlgebra", "PrecompileTools", "Random", "StaticArraysCore"]
git-tree-sha1 = "6e00379a24597be4ae1ee6b2d882e15392040132"
uuid = "90137ffa-7385-5640-81b9-e52037218182"
version = "1.9.5"
weakdeps = ["ChainRulesCore", "Statistics"]
[deps.StaticArrays.extensions]
StaticArraysChainRulesCoreExt = "ChainRulesCore"
StaticArraysStatisticsExt = "Statistics"
[[deps.StaticArraysCore]]
git-tree-sha1 = "192954ef1208c7019899fbf8049e717f92959682"
uuid = "1e83bf80-4336-4d27-bf5d-d5a4f845583c"
version = "1.4.3"
[[deps.Statistics]]
deps = ["LinearAlgebra", "SparseArrays"]
uuid = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
version = "1.10.0"
[[deps.StatsAPI]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "1ff449ad350c9c4cbc756624d6f8a8c3ef56d3ed"
uuid = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
version = "1.7.0"
[[deps.StatsBase]]
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "LogExpFunctions", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
git-tree-sha1 = "5cf7606d6cef84b543b483848d4ae08ad9832b21"
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
version = "0.34.3"
[[deps.StatsFuns]]
deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
git-tree-sha1 = "cef0472124fab0695b58ca35a77c6fb942fdab8a"
uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
version = "1.3.1"
[deps.StatsFuns.extensions]
StatsFunsChainRulesCoreExt = "ChainRulesCore"
StatsFunsInverseFunctionsExt = "InverseFunctions"
[deps.StatsFuns.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.StructArrays]]
deps = ["ConstructionBase", "DataAPI", "Tables"]
git-tree-sha1 = "f4dc295e983502292c4c3f951dbb4e985e35b3be"
uuid = "09ab397b-f2b6-538f-b94a-2f83cf4a842a"
version = "0.6.18"
[deps.StructArrays.extensions]
StructArraysAdaptExt = "Adapt"
StructArraysGPUArraysCoreExt = "GPUArraysCore"
StructArraysSparseArraysExt = "SparseArrays"
StructArraysStaticArraysExt = "StaticArrays"
[deps.StructArrays.weakdeps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
[[deps.SuiteSparse]]
deps = ["Libdl", "LinearAlgebra", "Serialization", "SparseArrays"]
uuid = "4607b0f0-06f3-5cda-b6b1-a6196a1729e9"
[[deps.SuiteSparse_jll]]
deps = ["Artifacts", "Libdl", "libblastrampoline_jll"]
uuid = "bea87d4a-7f5b-5778-9afe-8cc45184846c"
version = "7.2.1+1"
[[deps.TOML]]
deps = ["Dates"]
uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
version = "1.0.3"
[[deps.TableTraits]]
deps = ["IteratorInterfaceExtensions"]
git-tree-sha1 = "c06b2f539df1c6efa794486abfb6ed2022561a39"
uuid = "3783bdb8-4a98-5b6b-af9a-565f29a5fe9c"
version = "1.0.1"
[[deps.Tables]]
deps = ["DataAPI", "DataValueInterfaces", "IteratorInterfaceExtensions", "LinearAlgebra", "OrderedCollections", "TableTraits"]
git-tree-sha1 = "cb76cf677714c095e535e3501ac7954732aeea2d"
uuid = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
version = "1.11.1"
[[deps.Tar]]
deps = ["ArgTools", "SHA"]
uuid = "a4e569a6-e804-4fa4-b0f3-eef7a1d5b13e"
version = "1.10.0"
[[deps.TensorCore]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "1feb45f88d133a655e001435632f019a9a1bcdb6"
uuid = "62fd8b95-f654-4bbd-a8a5-9c27f68ccd50"
version = "0.1.1"
[[deps.Test]]
deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[deps.TiffImages]]
deps = ["ColorTypes", "DataStructures", "DocStringExtensions", "FileIO", "FixedPointNumbers", "IndirectArrays", "Inflate", "Mmap", "OffsetArrays", "PkgVersion", "ProgressMeter", "SIMD", "UUIDs"]
git-tree-sha1 = "bc7fd5c91041f44636b2c134041f7e5263ce58ae"
uuid = "731e570b-9d59-4bfa-96dc-6df516fadf69"
version = "0.10.0"
[[deps.TranscodingStreams]]
git-tree-sha1 = "a947ea21087caba0a798c5e494d0bb78e3a1a3a0"
uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
version = "0.10.9"
weakdeps = ["Random", "Test"]
[deps.TranscodingStreams.extensions]
TestExt = ["Test", "Random"]
[[deps.TriplotBase]]
git-tree-sha1 = "4d4ed7f294cda19382ff7de4c137d24d16adc89b"
uuid = "981d1d27-644d-49a2-9326-4793e63143c3"
version = "0.1.0"
[[deps.UUIDs]]
deps = ["Random", "SHA"]
uuid = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
[[deps.Unicode]]
uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5"
[[deps.UnicodeFun]]
deps = ["REPL"]
git-tree-sha1 = "53915e50200959667e78a92a418594b428dffddf"
uuid = "1cfade01-22cf-5700-b092-accc4b62d6e1"
version = "0.4.1"
[[deps.Unitful]]
deps = ["Dates", "LinearAlgebra", "Random"]
git-tree-sha1 = "dd260903fdabea27d9b6021689b3cd5401a57748"
uuid = "1986cc42-f94f-5a68-af5c-568840ba703d"
version = "1.20.0"
[deps.Unitful.extensions]
ConstructionBaseUnitfulExt = "ConstructionBase"
InverseFunctionsUnitfulExt = "InverseFunctions"
[deps.Unitful.weakdeps]
ConstructionBase = "187b0558-2788-49d3-abe0-74a17ed4e7c9"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.WoodburyMatrices]]
deps = ["LinearAlgebra", "SparseArrays"]
git-tree-sha1 = "c1a7aa6219628fcd757dede0ca95e245c5cd9511"
uuid = "efce3f68-66dc-5838-9240-27a6d6f5f9b6"
version = "1.0.0"
[[deps.XML2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Zlib_jll"]
git-tree-sha1 = "52ff2af32e591541550bd753c0da8b9bc92bb9d9"
uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
version = "2.12.7+0"
[[deps.XSLT_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
git-tree-sha1 = "91844873c4085240b95e795f692c4cec4d805f8a"
uuid = "aed1982a-8fda-507f-9586-7b0439959a61"
version = "1.1.34+0"
[[deps.Xorg_libX11_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libxcb_jll", "Xorg_xtrans_jll"]
git-tree-sha1 = "afead5aba5aa507ad5a3bf01f58f82c8d1403495"
uuid = "4f6342f7-b3d2-589e-9d20-edeb45f2b2bc"
version = "1.8.6+0"
[[deps.Xorg_libXau_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "6035850dcc70518ca32f012e46015b9beeda49d8"
uuid = "0c0b7dd1-d40b-584c-a123-a41640f87eec"
version = "1.0.11+0"
[[deps.Xorg_libXdmcp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "34d526d318358a859d7de23da945578e8e8727b7"
uuid = "a3789734-cfe1-5b06-b2d0-1dd0d9d62d05"
version = "1.1.4+0"
[[deps.Xorg_libXext_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libX11_jll"]
git-tree-sha1 = "d2d1a5c49fae4ba39983f63de6afcbea47194e85"
uuid = "1082639a-0dae-5f34-9b06-72781eeb8cb3"
version = "1.3.6+0"
[[deps.Xorg_libXrender_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libX11_jll"]
git-tree-sha1 = "47e45cd78224c53109495b3e324df0c37bb61fbe"
uuid = "ea2f1a96-1ddc-540d-b46f-429655e07cfa"
version = "0.9.11+0"
[[deps.Xorg_libpthread_stubs_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "8fdda4c692503d44d04a0603d9ac0982054635f9"
uuid = "14d82f49-176c-5ed1-bb49-ad3f5cbd8c74"
version = "0.1.1+0"
[[deps.Xorg_libxcb_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "XSLT_jll", "Xorg_libXau_jll", "Xorg_libXdmcp_jll", "Xorg_libpthread_stubs_jll"]
git-tree-sha1 = "b4bfde5d5b652e22b9c790ad00af08b6d042b97d"
uuid = "c7cfdc94-dc32-55de-ac96-5a1b8d977c5b"
version = "1.15.0+0"
[[deps.Xorg_xtrans_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "e92a1a012a10506618f10b7047e478403a046c77"
uuid = "c5fb5394-a638-5e4d-96e5-b29de1b5cf10"
version = "1.5.0+0"
[[deps.ZipFile]]
deps = ["Libdl", "Printf", "Zlib_jll"]
git-tree-sha1 = "f492b7fe1698e623024e873244f10d89c95c340a"
uuid = "a5390f91-8eb1-5f08-bee0-b1d1ffed6cea"
version = "0.10.1"
[[deps.Zlib_jll]]
deps = ["Libdl"]
uuid = "83775a58-1f1d-513f-b197-d71354ab007a"
version = "1.2.13+1"
[[deps.isoband_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "51b5eeb3f98367157a7a12a1fb0aa5328946c03c"
uuid = "9a68df92-36a6-505f-a73e-abb412b6bfb4"
version = "0.2.3+0"
[[deps.libaec_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "46bf7be2917b59b761247be3f317ddf75e50e997"
uuid = "477f73a3-ac25-53e9-8cc3-50b2fa2566f0"
version = "1.1.2+0"
[[deps.libaom_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "1827acba325fdcdf1d2647fc8d5301dd9ba43a9d"
uuid = "a4ae2306-e953-59d6-aa16-d00cac43593b"
version = "3.9.0+0"
[[deps.libass_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "HarfBuzz_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "5982a94fcba20f02f42ace44b9894ee2b140fe47"
uuid = "0ac62f75-1d6f-5e53-bd7c-93b484bb37c0"
version = "0.15.1+0"
[[deps.libblastrampoline_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850b90-86db-534c-a0d3-1478176c7d93"
version = "5.8.0+1"
[[deps.libfdk_aac_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "daacc84a041563f965be61859a36e17c4e4fcd55"
uuid = "f638f0a6-7fb0-5443-88ba-1cc74229b280"
version = "2.0.2+0"
[[deps.libpng_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Zlib_jll"]
git-tree-sha1 = "d7015d2e18a5fd9a4f47de711837e980519781a4"
uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
version = "1.6.43+1"
[[deps.libsixel_jll]]
deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Pkg", "libpng_jll"]
git-tree-sha1 = "d4f63314c8aa1e48cd22aa0c17ed76cd1ae48c3c"
uuid = "075b6546-f08a-558a-be8f-8157d0f608a5"
version = "1.10.3+0"
[[deps.libvorbis_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Ogg_jll", "Pkg"]
git-tree-sha1 = "b910cb81ef3fe6e78bf6acee440bda86fd6ae00c"
uuid = "f27f6e37-5d2b-51aa-960f-b287f2bc3b7a"
version = "1.3.7+1"
[[deps.nghttp2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850ede-7688-5339-a07c-302acd2aaf8d"
version = "1.52.0+1"
[[deps.oneTBB_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "7d0ea0f4895ef2f5cb83645fa689e52cb55cf493"
uuid = "1317d2d5-d96f-522e-a858-c73665f53c3e"
version = "2021.12.0+0"
[[deps.p7zip_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
version = "17.4.0+2"
[[deps.x264_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4fea590b89e6ec504593146bf8b988b2c00922b2"
uuid = "1270edf5-f2f9-52d2-97e9-ab00b5d0237a"
version = "2021.5.5+0"
[[deps.x265_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "ee567a171cce03570d77ad3a43e90218e38937a9"
uuid = "dfaa095f-4041-5dcd-9319-2fabd8486b76"
version = "3.5.0+0"
"""
# ╔═╡ Cell order:
# ╟─7a5e7efa-2a1c-11ef-23d9-7f2f50eb9a05
# ╟─4fbecc23-0632-4d2b-a6f9-e3bd5c283fc1
# ╠═c49eb38c-eb28-4bb0-ac21-c93ee8d70f03
# ╟─d679bef8-9908-4314-ad7b-d024b1a88785
# ╠═ffe10069-dd12-4b32-98fb-d7a7fc2934ad
# ╠═8ae49c7e-f300-4adf-9b52-801da6ef2af2
# ╟─87dbb90a-6d5b-4af1-969f-1d301b5e5aae
# ╠═e7cc256d-6138-4426-96e6-c12abc8979f5
# ╟─322758fd-d469-427e-93ec-87f98d57ec82
# ╠═b5b8ef77-3bf9-4e79-89f6-dccc84990bbd
# ╠═09983bad-c192-4b3a-b120-cfccc4041ce1
# ╟─fba839a3-4bbb-4ed6-9faf-6d56e304befb
# ╠═34ec42fc-0e4d-40ea-a898-da9bd70ee74d
# ╟─e727aaa6-7f0b-4083-9837-7415027c29c7
# ╟─667fae24-27e5-4079-a1a3-c22375371dc1
# ╠═06a685a5-c33e-4a68-8b2c-c976ea537b8c
# ╟─2d5e2926-e087-4b4e-abfb-7faabecf66eb
# ╠═cd0fc733-b2b9-44fd-b9c9-44f730ff1010
# ╟─00000000-0000-0000-0000-000000000001
# ╟─00000000-0000-0000-0000-000000000002
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
| ["MIT"] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 62661 |
### A Pluto.jl notebook ###
# v0.19.42
#> [frontmatter]
#> title = "Monkey BMI"
using Markdown
using InteractiveUtils
# ╔═╡ fdfb1464-908f-437b-aaa4-5a9e32dd2feb
using CairoMakie, GCPDecompositions, LinearAlgebra, Statistics
# ╔═╡ a1db5ffd-620a-4a70-b859-e90c9d6aa5fb
using Downloads: download
# ╔═╡ 066b6f99-5135-49d2-a7de-cfb22ed72a3d
using CacheVariables, MAT
# ╔═╡ f9d70766-96c9-4d06-bc78-a2b1761cb9f6
let
# DEFINE METADATA
TITLE = "Monkey Brain Machine Interface"
AUTHORS = [
"Gianna Baker" => "",
"David Hong" => "https://dahong.gitlab.io/",
]
CREATED = "28 June 2024"
FILENAME = replace(basename(@__FILE__), r"#==#.*" => "")
# CREATE MARKDOWN WITH BADGES
function badge_md((label, message); color="blue")
label_esc = replace(label, " " => "%20", "_" => "__", "-" => "--")
        message_esc = replace(message, " " => "%20", "_" => "__", "-" => "--")
        # NOTE: the returned badge string was blank in the source; it is reconstructed
        # here assuming the standard shields.io static-badge URL format.
        ""
end
function badge_md(lblmsg, url; color="gold")
badge = badge_md(lblmsg; color)
return isempty(url) ? badge : "[$(badge)]($url)"
end
"""
# $TITLE
$(badge_md(
"Download Notebook" => FILENAME, joinpath("..", FILENAME);
color="blue"
))
$(join([badge_md("Author" => NAME, URL) for (NAME, URL) in AUTHORS], '\n'))
$(badge_md("Created" => CREATED; color="floralwhite"))
""" |> Markdown.parse
end
# ╔═╡ 5c7c55d4-36b5-4703-9018-c334c9465d50
md"""
This demo is based on and uses data from the following Tensor Toolbox demo: [`https://gitlab.com/tensors/tensor_data_monkey_bmi`](https://gitlab.com/tensors/tensor_data_monkey_bmi)
Please see the license here: [`https://gitlab.com/tensors/tensor_data_monkey_bmi/-/blob/4870db135b362b2de499c63b48533abdd5185228/LICENSE`](https://gitlab.com/tensors/tensor_data_monkey_bmi/-/blob/4870db135b362b2de499c63b48533abdd5185228/LICENSE).
Relevant papers:
1. S. Vyas, N. Even-Chen, S. D. Stavisky, S. I. Ryu, P. Nuyujukian, and K. V. Shenoy, Neural Population Dynamics Underlying Motor Learning Transfer, Neuron, Elsevier BV, Vol. 97, No. 5, pp. 1177-1186.e3, March 2018, [`https://doi.org/10.1016/j.neuron.2018.01.040`](https://doi.org/10.1016/j.neuron.2018.01.040).
2. S. Vyas, D. J. O'Shea, S. I. Ryu, and K. V. Shenoy, Causal Role of Motor Preparation during Error-Driven Learning, Neuron, Elsevier BV, Vol. 106, No. 2, pp. 329-339.e4, April 2020, [`https://doi.org/10.1016/j.neuron.2020.01.019`](https://doi.org/10.1016/j.neuron.2020.01.019).
3. A. H. Williams, T. H. Kim, F. Wang, S. Vyas, S. I. Ryu, K. V. Shenoy, M. Schnitzer, T. G. Kolda, S. Ganguli, Unsupervised Discovery of Demixed, Low-dimensional Neural Dynamics across Multiple Timescales through Tensor Components Analysis, Neuron, 98(6):1099-1115, 2018, [`https://doi.org/10.1016/j.neuron.2018.05.015`](https://doi.org/10.1016/j.neuron.2018.05.015).
"""
# ╔═╡ 9210e64a-6939-4ed5-b6d1-41685535b2c8
md"""
## Loading the data
The following code downloads the data file, extracts the data, and caches it.
"""
# ╔═╡ 2c945712-cbee-4a34-9a3b-ba602aa5fac0
data = cache(joinpath("monkey-bmi-cache", "data.bson")) do
# Download file
url = "https://gitlab.com/tensors/tensor_data_monkey_bmi/-/raw/main/data.mat"
path = download(url, tempname(@__DIR__))
# Extract data
data = matread(path)
# Clean up and output data
rm(path)
data
end
# ╔═╡ 712c91c0-0e41-4661-99ca-a17e50ce4f80
X = data["X"]
# ╔═╡ 224e7904-1004-4d9a-aec9-4dae9e797e3a
md"""
The data tensor `X` is $(join(size(X), '×'))
and consists of measurements across
$(size(X, 1)) neurons,
$(size(X, 2)) time steps,
and
$(size(X, 3)) trials.
"""
# ╔═╡ 7e86d631-195c-4647-a99a-7fb2da4e79ff
md"""
Each trial has an associated angle (described more below).
"""
# ╔═╡ 62c3f1b5-5cf7-4b69-8db4-9f380067667b
angles = dropdims(data["angle"]; dims=2)
# ╔═╡ c2de9482-7a09-4540-83d3-8d434200683a
md"""
## Understanding and visualizing the data
"""
# ╔═╡ 038c9c6a-a369-4504-8783-2a4c56c051ae
html"""
<figure style="margin:0">
<img
src="https://gitlab.com/tensors/tensor_data_monkey_bmi/-/raw/main/graphics/monkey_bmi_graphic.png"
alt="Monkey BMI Graphic"
style="width: 50%;float:left"
/>
<img
src="https://gitlab.com/tensors/tensor_data_monkey_bmi/-/raw/main/graphics/monkey_bmi_cursors.png"
alt="Monkey BMI Cursors"
style="width: 50%;float:left"
/>
<figcaption style="text-align:center">
Image Credit:
<a href="https://gitlab.com/tensors/tensor_data_monkey_bmi">
https://gitlab.com/tensors/tensor_data_monkey_bmi
</a>
</figcaption>
</figure>
"""
# ╔═╡ 52d5d6d5-f331-4d4b-a150-577706b3f87a
md"""
The data tensor is (pre-processed) neural data
from a Brain-Machine Interface (BMI) experiment (illustrated above).
In this experiment, a monkey uses the BMI to:
1. move the cursor to one of the four targets, then
2. hold the cursor on the target.
The targets are identified by their positions along a circle:
0, 90, 180, and -90 degrees.
While the monkey does these two tasks,
the BMI records neural spike data for many neurons over time.
This is then repeated for many trials.
After pre-processing,
the result is a
$(join(size(X), '×')) data tensor
of measurements across
$(size(X, 1)) neurons,
$(size(X, 2)) time steps,
and
$(size(X, 3)) trials.
The first 100 time steps correspond to the first task (acquire a target)
and the second 100 time steps correspond to the second task (hold the target).
"""
# ╔═╡ 1f95ecf2-f166-4e24-b124-d950cf4942d9
md"""
The following figure plots the time series in the data tensor.
Each subplot shows the time series from a single neuron
(each thin curve is the time series for a single trial, colored by target).
The thick curves show the average time series for each target.
"""
# ╔═╡ 14a0cf26-0003-45fe-b03c-8ad0140d26b2
angle_colors = Dict(0 => :tomato1, 90 => :gold, 180 => :darkorchid3, -90 => :cyan3);
# ╔═╡ 92ac6f39-7946-49dd-bc8c-b7f7ee430d66
with_theme() do
fig = Figure(; size = (800, 800))
# Plot time series
for (idx, data) in enumerate(eachslice(X; dims=1))
ax = Axis(fig[fldmod1(idx < 4 ? idx : idx+2, 5)...]; title="Neuron $idx",
xlabel="Time Steps", xticks=0:100:200,
ylabel="Activity", yticks=LinearTicks(3)
)
# Individual time series
series!(ax, permutedims(data);
color=[(angle_colors[angle], 0.7) for angle in angles], linewidth=0.2)
# Average time series
for angle in -90:90:180
lines!(ax, mean(eachcol(data)[angles .== angle]);
color=angle_colors[angle], linewidth=1.5)
end
end
# Tweak formatting
linkxaxes!(contents(fig.layout)...)
hidexdecorations!.(contents(fig[1:end-1, :]); ticks=false, grid=false)
hideydecorations!.(contents(fig[:, 2:end]);
ticklabels=false, ticks=false, grid=false)
rowgap!(fig.layout, 10)
colgap!(fig.layout, 10)
# Add legend
Legend(fig[1, 4:5],
[LineElement(; color=angle_colors[angle]) for angle in [0, 90, 180, -90]],
["$(angle)°" for angle in [0, 90, 180, -90]],
"Target Path Trajectory";
orientation = :horizontal
)
fig
end
# ╔═╡ 9cc4dfb7-ceb7-4d0f-99f6-dc48825c93e1
md"""
Note that the neurons have significantly varying overall levels of activity
(e.g., neuron 1 and neuron 43 differ by roughly a factor of four).
Likewise, there appears to generally be more activity
when acquiring the target (the first 100 time steps)
than when holding it (the second 100 time steps).
"""
# ╔═╡ f1266f66-0baf-45fa-aa20-a6279bff5cd8
md"""
## Run GCP Decomposition
Generalized CP decomposition with respect to non-negative least-squares
can be computed using `gcp` by setting the `loss` keyword argument.
"""
# ╔═╡ 2017d76d-1a5e-447d-b569-9edbb5c2cd13
M = gcp(X, 10; loss = GCPLosses.NonnegativeLeastSquares())
# ╔═╡ 401afd52-247f-4159-88c9-91b9ff75e925
md"""
Now, we plot the (normalized) factors.
"""
# ╔═╡ 09b91268-7365-4cae-88ce-21ab78e0ce8c
with_theme() do
fig = Figure(; size = (700, 700))
# Plot factors (normalized by max)
for row in 1:ncomps(M)
barplot(fig[row,1], normalize(M.U[1][:,row], Inf); color = :orange)
lines(fig[row,2], normalize(M.U[2][:,row], Inf); linewidth = 4)
scatter(fig[row,3], normalize(M.U[3][:,row], Inf);
color = [angle_colors[angle] for angle in angles])
end
# Link and hide x axes
linkxaxes!(contents(fig[:,1])...)
linkxaxes!(contents(fig[:,2])...)
linkxaxes!(contents(fig[:,3])...)
hidexdecorations!.(contents(fig[1:end-1,:]); ticks=false, grid=false)
# Link and hide y axes
linkyaxes!(contents(fig.layout)...)
hideydecorations!.(contents(fig.layout); ticks=false, grid=false)
# Add legend
Legend(fig[:, 4],
[MarkerElement(; color=angle_colors[angle], marker=:circle)
for angle in [0, 90, 180, -90]],
["$(angle)°" for angle in [0, 90, 180, -90]],
)
# Add labels
Label(fig[0,1], "Neurons"; tellwidth=false, fontsize=20)
Label(fig[0,2], "Time"; tellwidth=false, fontsize=20)
Label(fig[0,3], "Trials"; tellwidth=false, fontsize=20)
# Tweak layout
rowgap!(fig.layout, 10)
colgap!(fig.layout, 10)
colsize!(fig.layout, 2, Relative(1/4))
fig
end
# ╔═╡ 00ce15cd-404a-458e-b557-e1b5c55c41c2
md"""
Note that the factors in the neuron mode reflect our earlier observation
that the neurons are roughly in decreasing order of activity.
Moreover, several factors in the trial mode reflect which target was selected
even though the tensor decomposition was not given that information.
"""
# ╔═╡ 00000000-0000-0000-0000-000000000001
PLUTO_PROJECT_TOML_CONTENTS = """
[deps]
CacheVariables = "9a355d7c-ffe9-11e8-019f-21dae27d1722"
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
Downloads = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
GCPDecompositions = "f59fb95b-1bc8-443b-b347-5e445a549f37"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MAT = "23992714-dd62-5051-b70f-ba57cb901cac"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
[compat]
CacheVariables = "~0.1.4"
CairoMakie = "~0.12.2"
GCPDecompositions = "~0.2.0"
MAT = "~0.10.7"
"""
# ╔═╡ 00000000-0000-0000-0000-000000000002
PLUTO_MANIFEST_TOML_CONTENTS = """
# This file is machine-generated - editing it directly is not advised
julia_version = "1.10.4"
manifest_format = "2.0"
project_hash = "7e76945b51e60943527a8dbe1a791b7134dd1807"
[[deps.AbstractFFTs]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "d92ad398961a3ed262d8bf04a1a2b8340f915fef"
uuid = "621f4979-c628-5d54-868e-fcf4e3e8185c"
version = "1.5.0"
weakdeps = ["ChainRulesCore", "Test"]
[deps.AbstractFFTs.extensions]
AbstractFFTsChainRulesCoreExt = "ChainRulesCore"
AbstractFFTsTestExt = "Test"
[[deps.AbstractTrees]]
git-tree-sha1 = "2d9c9a55f9c93e8887ad391fbae72f8ef55e1177"
uuid = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
version = "0.4.5"
[[deps.Adapt]]
deps = ["LinearAlgebra", "Requires"]
git-tree-sha1 = "6a55b747d1812e699320963ffde36f1ebdda4099"
uuid = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
version = "4.0.4"
weakdeps = ["StaticArrays"]
[deps.Adapt.extensions]
AdaptStaticArraysExt = "StaticArrays"
[[deps.AliasTables]]
deps = ["PtrArrays", "Random"]
git-tree-sha1 = "9876e1e164b144ca45e9e3198d0b689cadfed9ff"
uuid = "66dad0bd-aa9a-41b7-9441-69ab47430ed8"
version = "1.1.3"
[[deps.Animations]]
deps = ["Colors"]
git-tree-sha1 = "e81c509d2c8e49592413bfb0bb3b08150056c79d"
uuid = "27a7e980-b3e6-11e9-2bcd-0b925532e340"
version = "0.4.1"
[[deps.ArgTools]]
uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f"
version = "1.1.1"
[[deps.Artifacts]]
uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
[[deps.Automa]]
deps = ["PrecompileTools", "TranscodingStreams"]
git-tree-sha1 = "588e0d680ad1d7201d4c6a804dcb1cd9cba79fbb"
uuid = "67c07d97-cdcb-5c2c-af73-a7f9c32a568b"
version = "1.0.3"
[[deps.AxisAlgorithms]]
deps = ["LinearAlgebra", "Random", "SparseArrays", "WoodburyMatrices"]
git-tree-sha1 = "01b8ccb13d68535d73d2b0c23e39bd23155fb712"
uuid = "13072b0f-2c55-5437-9ae7-d433b7a33950"
version = "1.1.0"
[[deps.AxisArrays]]
deps = ["Dates", "IntervalSets", "IterTools", "RangeArrays"]
git-tree-sha1 = "16351be62963a67ac4083f748fdb3cca58bfd52f"
uuid = "39de3d68-74b9-583c-8d2d-e117c070f3a9"
version = "0.4.7"
[[deps.BSON]]
git-tree-sha1 = "4c3e506685c527ac6a54ccc0c8c76fd6f91b42fb"
uuid = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
version = "0.3.9"
[[deps.Base64]]
uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
[[deps.BufferedStreams]]
git-tree-sha1 = "4ae47f9a4b1dc19897d3743ff13685925c5202ec"
uuid = "e1450e63-4bb3-523b-b2a4-4ffa8c0fd77d"
version = "1.2.1"
[[deps.Bzip2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "9e2a6b69137e6969bab0152632dcb3bc108c8bdd"
uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
version = "1.0.8+1"
[[deps.CEnum]]
git-tree-sha1 = "389ad5c84de1ae7cf0e28e381131c98ea87d54fc"
uuid = "fa961155-64e5-5f13-b03f-caf6b980ea82"
version = "0.5.0"
[[deps.CRC32c]]
uuid = "8bf52ea8-c179-5cab-976a-9e18b702a9bc"
[[deps.CRlibm_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "e329286945d0cfc04456972ea732551869af1cfc"
uuid = "4e9b3aee-d8a1-5a3d-ad8b-7d824db253f0"
version = "1.0.1+0"
[[deps.CacheVariables]]
deps = ["BSON", "Logging"]
git-tree-sha1 = "0e74f35a57b1ebd6f622e47a18d92255cbd45b91"
uuid = "9a355d7c-ffe9-11e8-019f-21dae27d1722"
version = "0.1.4"
[[deps.Cairo]]
deps = ["Cairo_jll", "Colors", "Glib_jll", "Graphics", "Libdl", "Pango_jll"]
git-tree-sha1 = "d0b3f8b4ad16cb0a2988c6788646a5e6a17b6b1b"
uuid = "159f3aea-2a34-519c-b102-8c37f9878175"
version = "1.0.5"
[[deps.CairoMakie]]
deps = ["CRC32c", "Cairo", "Colors", "FileIO", "FreeType", "GeometryBasics", "LinearAlgebra", "Makie", "PrecompileTools"]
git-tree-sha1 = "9e8eaaff3e5951d8c61b7c9261d935eb27e0304b"
uuid = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
version = "0.12.2"
[[deps.Cairo_jll]]
deps = ["Artifacts", "Bzip2_jll", "CompilerSupportLibraries_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "JLLWrappers", "LZO_jll", "Libdl", "Pixman_jll", "Xorg_libXext_jll", "Xorg_libXrender_jll", "Zlib_jll", "libpng_jll"]
git-tree-sha1 = "a2f1c8c668c8e3cb4cca4e57a8efdb09067bb3fd"
uuid = "83423d85-b0ee-5818-9007-b63ccbeb887a"
version = "1.18.0+2"
[[deps.Calculus]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "f641eb0a4f00c343bbc32346e1217b86f3ce9dad"
uuid = "49dc2e85-a5d0-5ad3-a950-438e2897f1b9"
version = "0.5.1"
[[deps.ChainRulesCore]]
deps = ["Compat", "LinearAlgebra"]
git-tree-sha1 = "71acdbf594aab5bbb2cec89b208c41b4c411e49f"
uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
version = "1.24.0"
weakdeps = ["SparseArrays"]
[deps.ChainRulesCore.extensions]
ChainRulesCoreSparseArraysExt = "SparseArrays"
[[deps.CodecZlib]]
deps = ["TranscodingStreams", "Zlib_jll"]
git-tree-sha1 = "59939d8a997469ee05c4b4944560a820f9ba0d73"
uuid = "944b1d66-785c-5afd-91f1-9de20f533193"
version = "0.7.4"
[[deps.ColorBrewer]]
deps = ["Colors", "JSON", "Test"]
git-tree-sha1 = "61c5334f33d91e570e1d0c3eb5465835242582c4"
uuid = "a2cac450-b92f-5266-8821-25eda20663c8"
version = "0.4.0"
[[deps.ColorSchemes]]
deps = ["ColorTypes", "ColorVectorSpace", "Colors", "FixedPointNumbers", "PrecompileTools", "Random"]
git-tree-sha1 = "4b270d6465eb21ae89b732182c20dc165f8bf9f2"
uuid = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
version = "3.25.0"
[[deps.ColorTypes]]
deps = ["FixedPointNumbers", "Random"]
git-tree-sha1 = "b10d0b65641d57b8b4d5e234446582de5047050d"
uuid = "3da002f7-5984-5a60-b8a6-cbb66c0b333f"
version = "0.11.5"
[[deps.ColorVectorSpace]]
deps = ["ColorTypes", "FixedPointNumbers", "LinearAlgebra", "Requires", "Statistics", "TensorCore"]
git-tree-sha1 = "a1f44953f2382ebb937d60dafbe2deea4bd23249"
uuid = "c3611d14-8923-5661-9e6a-0046d554d3a4"
version = "0.10.0"
weakdeps = ["SpecialFunctions"]
[deps.ColorVectorSpace.extensions]
SpecialFunctionsExt = "SpecialFunctions"
[[deps.Colors]]
deps = ["ColorTypes", "FixedPointNumbers", "Reexport"]
git-tree-sha1 = "362a287c3aa50601b0bc359053d5c2468f0e7ce0"
uuid = "5ae59095-9a9b-59fe-a467-6f913c188581"
version = "0.12.11"
[[deps.CommonSubexpressions]]
deps = ["MacroTools", "Test"]
git-tree-sha1 = "7b8a93dba8af7e3b42fecabf646260105ac373f7"
uuid = "bbf7d656-a473-5ed7-a52c-81e309532950"
version = "0.3.0"
[[deps.Compat]]
deps = ["TOML", "UUIDs"]
git-tree-sha1 = "b1c55339b7c6c350ee89f2c1604299660525b248"
uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
version = "4.15.0"
weakdeps = ["Dates", "LinearAlgebra"]
[deps.Compat.extensions]
CompatLinearAlgebraExt = "LinearAlgebra"
[[deps.CompilerSupportLibraries_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
version = "1.1.1+0"
[[deps.ConstructionBase]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "260fd2400ed2dab602a7c15cf10c1933c59930a2"
uuid = "187b0558-2788-49d3-abe0-74a17ed4e7c9"
version = "1.5.5"
weakdeps = ["IntervalSets", "StaticArrays"]
[deps.ConstructionBase.extensions]
ConstructionBaseIntervalSetsExt = "IntervalSets"
ConstructionBaseStaticArraysExt = "StaticArrays"
[[deps.Contour]]
git-tree-sha1 = "439e35b0b36e2e5881738abc8857bd92ad6ff9a8"
uuid = "d38c429a-6771-53c6-b99e-75d170b6e991"
version = "0.6.3"
[[deps.DataAPI]]
git-tree-sha1 = "abe83f3a2f1b857aac70ef8b269080af17764bbe"
uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
version = "1.16.0"
[[deps.DataStructures]]
deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
git-tree-sha1 = "1d0a14036acb104d9e89698bd408f63ab58cdc82"
uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
version = "0.18.20"
[[deps.DataValueInterfaces]]
git-tree-sha1 = "bfc1187b79289637fa0ef6d4436ebdfe6905cbd6"
uuid = "e2d170a0-9d28-54be-80f0-106bbe20a464"
version = "1.0.0"
[[deps.Dates]]
deps = ["Printf"]
uuid = "ade2ca70-3891-5945-98fb-dc099432e06a"
[[deps.DelaunayTriangulation]]
deps = ["EnumX", "ExactPredicates", "Random"]
git-tree-sha1 = "1755070db557ec2c37df2664c75600298b0c1cfc"
uuid = "927a84f5-c5f4-47a5-9785-b46e178433df"
version = "1.0.3"
[[deps.DiffResults]]
deps = ["StaticArraysCore"]
git-tree-sha1 = "782dd5f4561f5d267313f23853baaaa4c52ea621"
uuid = "163ba53b-c6d8-5494-b064-1a9d43ac40c5"
version = "1.1.0"
[[deps.DiffRules]]
deps = ["IrrationalConstants", "LogExpFunctions", "NaNMath", "Random", "SpecialFunctions"]
git-tree-sha1 = "23163d55f885173722d1e4cf0f6110cdbaf7e272"
uuid = "b552c78f-8df3-52c6-915a-8e097449b14b"
version = "1.15.1"
[[deps.Distributed]]
deps = ["Random", "Serialization", "Sockets"]
uuid = "8ba89e20-285c-5b6f-9357-94700520ee1b"
[[deps.Distributions]]
deps = ["AliasTables", "FillArrays", "LinearAlgebra", "PDMats", "Printf", "QuadGK", "Random", "SpecialFunctions", "Statistics", "StatsAPI", "StatsBase", "StatsFuns"]
git-tree-sha1 = "9c405847cc7ecda2dc921ccf18b47ca150d7317e"
uuid = "31c24e10-a181-5473-b8eb-7969acd0382f"
version = "0.25.109"
[deps.Distributions.extensions]
DistributionsChainRulesCoreExt = "ChainRulesCore"
DistributionsDensityInterfaceExt = "DensityInterface"
DistributionsTestExt = "Test"
[deps.Distributions.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
DensityInterface = "b429d917-457f-4dbc-8f4c-0cc954292b1d"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[deps.DocStringExtensions]]
deps = ["LibGit2"]
git-tree-sha1 = "2fb1e02f2b635d0845df5d7c167fec4dd739b00d"
uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
version = "0.9.3"
[[deps.Downloads]]
deps = ["ArgTools", "FileWatching", "LibCURL", "NetworkOptions"]
uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
version = "1.6.0"
[[deps.DualNumbers]]
deps = ["Calculus", "NaNMath", "SpecialFunctions"]
git-tree-sha1 = "5837a837389fccf076445fce071c8ddaea35a566"
uuid = "fa6b7ba4-c1ee-5f82-b5fc-ecf0adba8f74"
version = "0.6.8"
[[deps.EarCut_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "e3290f2d49e661fbd94046d7e3726ffcb2d41053"
uuid = "5ae413db-bbd1-5e63-b57d-d24a61df00f5"
version = "2.2.4+0"
[[deps.EnumX]]
git-tree-sha1 = "bdb1942cd4c45e3c678fd11569d5cccd80976237"
uuid = "4e289a0a-7415-4d19-859d-a7e5c4648b56"
version = "1.0.4"
[[deps.ExactPredicates]]
deps = ["IntervalArithmetic", "Random", "StaticArrays"]
git-tree-sha1 = "b3f2ff58735b5f024c392fde763f29b057e4b025"
uuid = "429591f6-91af-11e9-00e2-59fbe8cec110"
version = "2.2.8"
[[deps.Expat_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "1c6317308b9dc757616f0b5cb379db10494443a7"
uuid = "2e619515-83b5-522b-bb60-26c02a35a201"
version = "2.6.2+0"
[[deps.Extents]]
git-tree-sha1 = "2140cd04483da90b2da7f99b2add0750504fc39c"
uuid = "411431e0-e8b7-467b-b5e0-f676ba4f2910"
version = "0.1.2"
[[deps.FFMPEG_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "JLLWrappers", "LAME_jll", "Libdl", "Ogg_jll", "OpenSSL_jll", "Opus_jll", "PCRE2_jll", "Zlib_jll", "libaom_jll", "libass_jll", "libfdk_aac_jll", "libvorbis_jll", "x264_jll", "x265_jll"]
git-tree-sha1 = "ab3f7e1819dba9434a3a5126510c8fda3a4e7000"
uuid = "b22a6f82-2f65-5046-a5b2-351ab43fb4e5"
version = "6.1.1+0"
[[deps.FFTW]]
deps = ["AbstractFFTs", "FFTW_jll", "LinearAlgebra", "MKL_jll", "Preferences", "Reexport"]
git-tree-sha1 = "4820348781ae578893311153d69049a93d05f39d"
uuid = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341"
version = "1.8.0"
[[deps.FFTW_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "c6033cc3892d0ef5bb9cd29b7f2f0331ea5184ea"
uuid = "f5851436-0d7a-5f13-b9de-f02708fd171a"
version = "3.3.10+0"
[[deps.FileIO]]
deps = ["Pkg", "Requires", "UUIDs"]
git-tree-sha1 = "82d8afa92ecf4b52d78d869f038ebfb881267322"
uuid = "5789e2e9-d7fb-5bc7-8068-2c6fae9b9549"
version = "1.16.3"
[[deps.FilePaths]]
deps = ["FilePathsBase", "MacroTools", "Reexport", "Requires"]
git-tree-sha1 = "919d9412dbf53a2e6fe74af62a73ceed0bce0629"
uuid = "8fc22ac5-c921-52a6-82fd-178b2807b824"
version = "0.8.3"
[[deps.FilePathsBase]]
deps = ["Compat", "Dates", "Mmap", "Printf", "Test", "UUIDs"]
git-tree-sha1 = "9f00e42f8d99fdde64d40c8ea5d14269a2e2c1aa"
uuid = "48062228-2e41-5def-b9a4-89aafe57970f"
version = "0.9.21"
[[deps.FileWatching]]
uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee"
[[deps.FillArrays]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "0653c0a2396a6da5bc4766c43041ef5fd3efbe57"
uuid = "1a297f60-69ca-5386-bcde-b61e274b549b"
version = "1.11.0"
weakdeps = ["PDMats", "SparseArrays", "Statistics"]
[deps.FillArrays.extensions]
FillArraysPDMatsExt = "PDMats"
FillArraysSparseArraysExt = "SparseArrays"
FillArraysStatisticsExt = "Statistics"
[[deps.FixedPointNumbers]]
deps = ["Statistics"]
git-tree-sha1 = "05882d6995ae5c12bb5f36dd2ed3f61c98cbb172"
uuid = "53c48c17-4a7d-5ca2-90c5-79b7896eea93"
version = "0.8.5"
[[deps.Fontconfig_jll]]
deps = ["Artifacts", "Bzip2_jll", "Expat_jll", "FreeType2_jll", "JLLWrappers", "Libdl", "Libuuid_jll", "Zlib_jll"]
git-tree-sha1 = "db16beca600632c95fc8aca29890d83788dd8b23"
uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
version = "2.13.96+0"
[[deps.Format]]
git-tree-sha1 = "9c68794ef81b08086aeb32eeaf33531668d5f5fc"
uuid = "1fa38f19-a742-5d3f-a2b9-30dd87b9d5f8"
version = "1.3.7"
[[deps.ForwardDiff]]
deps = ["CommonSubexpressions", "DiffResults", "DiffRules", "LinearAlgebra", "LogExpFunctions", "NaNMath", "Preferences", "Printf", "Random", "SpecialFunctions"]
git-tree-sha1 = "cf0fe81336da9fb90944683b8c41984b08793dad"
uuid = "f6369f11-7733-5829-9624-2563aa707210"
version = "0.10.36"
weakdeps = ["StaticArrays"]
[deps.ForwardDiff.extensions]
ForwardDiffStaticArraysExt = "StaticArrays"
[[deps.FreeType]]
deps = ["CEnum", "FreeType2_jll"]
git-tree-sha1 = "907369da0f8e80728ab49c1c7e09327bf0d6d999"
uuid = "b38be410-82b0-50bf-ab77-7b57e271db43"
version = "4.1.1"
[[deps.FreeType2_jll]]
deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Zlib_jll"]
git-tree-sha1 = "5c1d8ae0efc6c2e7b1fc502cbe25def8f661b7bc"
uuid = "d7e528f0-a631-5988-bf34-fe36492bcfd7"
version = "2.13.2+0"
[[deps.FreeTypeAbstraction]]
deps = ["ColorVectorSpace", "Colors", "FreeType", "GeometryBasics"]
git-tree-sha1 = "2493cdfd0740015955a8e46de4ef28f49460d8bc"
uuid = "663a7486-cb36-511b-a19d-713bb74d65c9"
version = "0.10.3"
[[deps.FriBidi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "1ed150b39aebcc805c26b93a8d0122c940f64ce2"
uuid = "559328eb-81f9-559d-9380-de523a88c83c"
version = "1.0.14+0"
[[deps.GCPDecompositions]]
deps = ["Compat", "ForwardDiff", "IntervalSets", "LBFGSB", "LinearAlgebra", "Random"]
git-tree-sha1 = "c29269f9bd5f2617e517fa26fc1ea5f314efa486"
uuid = "f59fb95b-1bc8-443b-b347-5e445a549f37"
version = "0.2.0"
[deps.GCPDecompositions.extensions]
LossFunctionsExt = "LossFunctions"
[deps.GCPDecompositions.weakdeps]
LossFunctions = "30fc2ffe-d236-52d8-8643-a9d8f7c094a7"
[[deps.GeoInterface]]
deps = ["Extents"]
git-tree-sha1 = "801aef8228f7f04972e596b09d4dba481807c913"
uuid = "cf35fbd7-0cd7-5166-be24-54bfbe79505f"
version = "1.3.4"
[[deps.GeometryBasics]]
deps = ["EarCut_jll", "Extents", "GeoInterface", "IterTools", "LinearAlgebra", "StaticArrays", "StructArrays", "Tables"]
git-tree-sha1 = "b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134"
uuid = "5c1252a2-5f33-56bf-86c9-59e7332b4326"
version = "0.4.11"
[[deps.Gettext_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
git-tree-sha1 = "9b02998aba7bf074d14de89f9d37ca24a1a0b046"
uuid = "78b55507-aeef-58d4-861c-77aaff3498b1"
version = "0.21.0+0"
[[deps.Glib_jll]]
deps = ["Artifacts", "Gettext_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Libiconv_jll", "Libmount_jll", "PCRE2_jll", "Zlib_jll"]
git-tree-sha1 = "7c82e6a6cd34e9d935e9aa4051b66c6ff3af59ba"
uuid = "7746bdde-850d-59dc-9ae8-88ece973131d"
version = "2.80.2+0"
[[deps.Graphics]]
deps = ["Colors", "LinearAlgebra", "NaNMath"]
git-tree-sha1 = "d61890399bc535850c4bf08e4e0d3a7ad0f21cbd"
uuid = "a2bd30eb-e257-5431-a919-1863eab51364"
version = "1.1.2"
[[deps.Graphite2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "344bf40dcab1073aca04aa0df4fb092f920e4011"
uuid = "3b182d85-2403-5c21-9c21-1e1f0cc25472"
version = "1.3.14+0"
[[deps.GridLayoutBase]]
deps = ["GeometryBasics", "InteractiveUtils", "Observables"]
git-tree-sha1 = "fc713f007cff99ff9e50accba6373624ddd33588"
uuid = "3955a311-db13-416c-9275-1d80ed98e5e9"
version = "0.11.0"
[[deps.Grisu]]
git-tree-sha1 = "53bb909d1151e57e2484c3d1b53e19552b887fb2"
uuid = "42e2da0e-8278-4e71-bc24-59509adca0fe"
version = "1.0.2"
[[deps.HDF5]]
deps = ["Compat", "HDF5_jll", "Libdl", "MPIPreferences", "Mmap", "Preferences", "Printf", "Random", "Requires", "UUIDs"]
git-tree-sha1 = "e856eef26cf5bf2b0f95f8f4fc37553c72c8641c"
uuid = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
version = "0.17.2"
[deps.HDF5.extensions]
MPIExt = "MPI"
[deps.HDF5.weakdeps]
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
[[deps.HDF5_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LLVMOpenMP_jll", "LazyArtifacts", "LibCURL_jll", "Libdl", "MPICH_jll", "MPIPreferences", "MPItrampoline_jll", "MicrosoftMPI_jll", "OpenMPI_jll", "OpenSSL_jll", "TOML", "Zlib_jll", "libaec_jll"]
git-tree-sha1 = "38c8874692d48d5440d5752d6c74b0c6b0b60739"
uuid = "0234f1f7-429e-5d53-9886-15a909be8d59"
version = "1.14.2+1"
[[deps.HarfBuzz_jll]]
deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"]
git-tree-sha1 = "129acf094d168394e80ee1dc4bc06ec835e510a3"
uuid = "2e76f6c2-a576-52d4-95c1-20adfe4de566"
version = "2.8.1+1"
[[deps.Hwloc_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "ca0f6bf568b4bfc807e7537f081c81e35ceca114"
uuid = "e33a78d0-f292-5ffc-b300-72abe9b543c8"
version = "2.10.0+0"
[[deps.HypergeometricFunctions]]
deps = ["DualNumbers", "LinearAlgebra", "OpenLibm_jll", "SpecialFunctions"]
git-tree-sha1 = "f218fe3736ddf977e0e772bc9a586b2383da2685"
uuid = "34004b35-14d8-5ef3-9330-4cdb6864b03a"
version = "0.3.23"
[[deps.ImageAxes]]
deps = ["AxisArrays", "ImageBase", "ImageCore", "Reexport", "SimpleTraits"]
git-tree-sha1 = "2e4520d67b0cef90865b3ef727594d2a58e0e1f8"
uuid = "2803e5a7-5153-5ecf-9a86-9b4c37f5f5ac"
version = "0.6.11"
[[deps.ImageBase]]
deps = ["ImageCore", "Reexport"]
git-tree-sha1 = "eb49b82c172811fd2c86759fa0553a2221feb909"
uuid = "c817782e-172a-44cc-b673-b171935fbb9e"
version = "0.1.7"
[[deps.ImageCore]]
deps = ["ColorVectorSpace", "Colors", "FixedPointNumbers", "MappedArrays", "MosaicViews", "OffsetArrays", "PaddedViews", "PrecompileTools", "Reexport"]
git-tree-sha1 = "b2a7eaa169c13f5bcae8131a83bc30eff8f71be0"
uuid = "a09fc81d-aa75-5fe9-8630-4744c3626534"
version = "0.10.2"
[[deps.ImageIO]]
deps = ["FileIO", "IndirectArrays", "JpegTurbo", "LazyModules", "Netpbm", "OpenEXR", "PNGFiles", "QOI", "Sixel", "TiffImages", "UUIDs"]
git-tree-sha1 = "437abb322a41d527c197fa800455f79d414f0a3c"
uuid = "82e4d734-157c-48bb-816b-45c225c6df19"
version = "0.6.8"
[[deps.ImageMetadata]]
deps = ["AxisArrays", "ImageAxes", "ImageBase", "ImageCore"]
git-tree-sha1 = "355e2b974f2e3212a75dfb60519de21361ad3cb7"
uuid = "bc367c6b-8a6b-528e-b4bd-a4b897500b49"
version = "0.9.9"
[[deps.Imath_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "0936ba688c6d201805a83da835b55c61a180db52"
uuid = "905a6f67-0a94-5f89-b386-d35d92009cd1"
version = "3.1.11+0"
[[deps.IndirectArrays]]
git-tree-sha1 = "012e604e1c7458645cb8b436f8fba789a51b257f"
uuid = "9b13fd28-a010-5f03-acff-a1bbcff69959"
version = "1.0.0"
[[deps.Inflate]]
git-tree-sha1 = "d1b1b796e47d94588b3757fe84fbf65a5ec4a80d"
uuid = "d25df0c9-e2be-5dd7-82c8-3ad0b3e990b9"
version = "0.1.5"
[[deps.IntelOpenMP_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "be50fe8df3acbffa0274a744f1a99d29c45a57f4"
uuid = "1d5cc7b8-4909-519e-a0f8-d0f5ad9712d0"
version = "2024.1.0+0"
[[deps.InteractiveUtils]]
deps = ["Markdown"]
uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
[[deps.Interpolations]]
deps = ["Adapt", "AxisAlgorithms", "ChainRulesCore", "LinearAlgebra", "OffsetArrays", "Random", "Ratios", "Requires", "SharedArrays", "SparseArrays", "StaticArrays", "WoodburyMatrices"]
git-tree-sha1 = "88a101217d7cb38a7b481ccd50d21876e1d1b0e0"
uuid = "a98d9a8b-a2ab-59e6-89dd-64a1c18fca59"
version = "0.15.1"
weakdeps = ["Unitful"]
[deps.Interpolations.extensions]
InterpolationsUnitfulExt = "Unitful"
[[deps.IntervalArithmetic]]
deps = ["CRlibm_jll", "MacroTools", "RoundingEmulator"]
git-tree-sha1 = "433b0bb201cd76cb087b017e49244f10394ebe9c"
uuid = "d1acc4aa-44c8-5952-acd4-ba5d80a2a253"
version = "0.22.14"
[deps.IntervalArithmetic.extensions]
IntervalArithmeticDiffRulesExt = "DiffRules"
IntervalArithmeticForwardDiffExt = "ForwardDiff"
IntervalArithmeticRecipesBaseExt = "RecipesBase"
[deps.IntervalArithmetic.weakdeps]
DiffRules = "b552c78f-8df3-52c6-915a-8e097449b14b"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
RecipesBase = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
[[deps.IntervalSets]]
git-tree-sha1 = "dba9ddf07f77f60450fe5d2e2beb9854d9a49bd0"
uuid = "8197267c-284f-5f27-9208-e0e47529a953"
version = "0.7.10"
[deps.IntervalSets.extensions]
IntervalSetsRandomExt = "Random"
IntervalSetsRecipesBaseExt = "RecipesBase"
IntervalSetsStatisticsExt = "Statistics"
[deps.IntervalSets.weakdeps]
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
RecipesBase = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
[[deps.IrrationalConstants]]
git-tree-sha1 = "630b497eafcc20001bba38a4651b327dcfc491d2"
uuid = "92d709cd-6900-40b7-9082-c6be49f344b6"
version = "0.2.2"
[[deps.Isoband]]
deps = ["isoband_jll"]
git-tree-sha1 = "f9b6d97355599074dc867318950adaa6f9946137"
uuid = "f1662d9f-8043-43de-a69a-05efc1cc6ff4"
version = "0.1.1"
[[deps.IterTools]]
git-tree-sha1 = "42d5f897009e7ff2cf88db414a389e5ed1bdd023"
uuid = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
version = "1.10.0"
[[deps.IteratorInterfaceExtensions]]
git-tree-sha1 = "a3f24677c21f5bbe9d2a714f95dcd58337fb2856"
uuid = "82899510-4779-5014-852e-03e436cf321d"
version = "1.0.0"
[[deps.JLLWrappers]]
deps = ["Artifacts", "Preferences"]
git-tree-sha1 = "7e5d6779a1e09a36db2a7b6cff50942a0a7d0fca"
uuid = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
version = "1.5.0"
[[deps.JSON]]
deps = ["Dates", "Mmap", "Parsers", "Unicode"]
git-tree-sha1 = "31e996f0a15c7b280ba9f76636b3ff9e2ae58c9a"
uuid = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
version = "0.21.4"
[[deps.JpegTurbo]]
deps = ["CEnum", "FileIO", "ImageCore", "JpegTurbo_jll", "TOML"]
git-tree-sha1 = "fa6d0bcff8583bac20f1ffa708c3913ca605c611"
uuid = "b835a17e-a41a-41e7-81f0-2f016b05efe0"
version = "0.1.5"
[[deps.JpegTurbo_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "c84a835e1a09b289ffcd2271bf2a337bbdda6637"
uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
version = "3.0.3+0"
[[deps.KernelDensity]]
deps = ["Distributions", "DocStringExtensions", "FFTW", "Interpolations", "StatsBase"]
git-tree-sha1 = "7d703202e65efa1369de1279c162b915e245eed1"
uuid = "5ab0869b-81aa-558d-bb23-cbf5423bbe9b"
version = "0.6.9"
[[deps.LAME_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "170b660facf5df5de098d866564877e119141cbd"
uuid = "c1c5ebd0-6772-5130-a774-d5fcae4a789d"
version = "3.100.2+0"
[[deps.LBFGSB]]
deps = ["L_BFGS_B_jll"]
git-tree-sha1 = "e2e6f53ee20605d0ea2be473480b7480bd5091b5"
uuid = "5be7bae1-8223-5378-bac3-9e7378a2f6e6"
version = "0.4.1"
[[deps.LLVMOpenMP_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "d986ce2d884d49126836ea94ed5bfb0f12679713"
uuid = "1d63c593-3942-5779-bab2-d838dc0a180e"
version = "15.0.7+0"
[[deps.LZO_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "70c5da094887fd2cae843b8db33920bac4b6f07d"
uuid = "dd4b983a-f0e5-5f8d-a1b7-129d4a5fb1ac"
version = "2.10.2+0"
[[deps.L_BFGS_B_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "77feda930ed3f04b2b0fbb5bea89e69d3677c6b0"
uuid = "81d17ec3-03a1-5e46-b53e-bddc35a13473"
version = "3.0.1+0"
[[deps.LaTeXStrings]]
git-tree-sha1 = "50901ebc375ed41dbf8058da26f9de442febbbec"
uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
version = "1.3.1"
[[deps.LazyArtifacts]]
deps = ["Artifacts", "Pkg"]
uuid = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
[[deps.LazyModules]]
git-tree-sha1 = "a560dd966b386ac9ae60bdd3a3d3a326062d3c3e"
uuid = "8cdb02fc-e678-4876-92c5-9defec4f444e"
version = "0.3.1"
[[deps.LibCURL]]
deps = ["LibCURL_jll", "MozillaCACerts_jll"]
uuid = "b27032c2-a3e7-50c8-80cd-2d36dbcbfd21"
version = "0.6.4"
[[deps.LibCURL_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll", "Zlib_jll", "nghttp2_jll"]
uuid = "deac9b47-8bc7-5906-a0fe-35ac56dc84c0"
version = "8.4.0+0"
[[deps.LibGit2]]
deps = ["Base64", "LibGit2_jll", "NetworkOptions", "Printf", "SHA"]
uuid = "76f85450-5226-5b5a-8eaa-529ad045b433"
[[deps.LibGit2_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll"]
uuid = "e37daf67-58a4-590a-8e99-b0245dd2ffc5"
version = "1.6.4+0"
[[deps.LibSSH2_jll]]
deps = ["Artifacts", "Libdl", "MbedTLS_jll"]
uuid = "29816b5a-b9ab-546f-933c-edad1886dfa8"
version = "1.11.0+1"
[[deps.Libdl]]
uuid = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
[[deps.Libffi_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "0b4a5d71f3e5200a7dff793393e09dfc2d874290"
uuid = "e9f186c6-92d2-5b65-8a66-fee21dc1b490"
version = "3.2.2+1"
[[deps.Libgcrypt_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgpg_error_jll"]
git-tree-sha1 = "9fd170c4bbfd8b935fdc5f8b7aa33532c991a673"
uuid = "d4300ac3-e22c-5743-9152-c294e39db1e4"
version = "1.8.11+0"
[[deps.Libgpg_error_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "fbb1f2bef882392312feb1ede3615ddc1e9b99ed"
uuid = "7add5ba3-2f88-524e-9cd5-f83b8a55f7b8"
version = "1.49.0+0"
[[deps.Libiconv_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "f9557a255370125b405568f9767d6d195822a175"
uuid = "94ce4f54-9a6c-5748-9c1c-f9c7231a4531"
version = "1.17.0+0"
[[deps.Libmount_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "0c4f9c4f1a50d8f35048fa0532dabbadf702f81e"
uuid = "4b2f31a3-9ecc-558c-b454-b3730dcb73e9"
version = "2.40.1+0"
[[deps.Libuuid_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "5ee6203157c120d79034c748a2acba45b82b8807"
uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
version = "2.40.1+0"
[[deps.LinearAlgebra]]
deps = ["Libdl", "OpenBLAS_jll", "libblastrampoline_jll"]
uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
[[deps.LogExpFunctions]]
deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
git-tree-sha1 = "a2d09619db4e765091ee5c6ffe8872849de0feea"
uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
version = "0.3.28"
[deps.LogExpFunctions.extensions]
LogExpFunctionsChainRulesCoreExt = "ChainRulesCore"
LogExpFunctionsChangesOfVariablesExt = "ChangesOfVariables"
LogExpFunctionsInverseFunctionsExt = "InverseFunctions"
[deps.LogExpFunctions.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
ChangesOfVariables = "9e997f8a-9a97-42d5-a9f1-ce6bfc15e2c0"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.Logging]]
uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"
[[deps.MAT]]
deps = ["BufferedStreams", "CodecZlib", "HDF5", "SparseArrays"]
git-tree-sha1 = "1d2dd9b186742b0f317f2530ddcbf00eebb18e96"
uuid = "23992714-dd62-5051-b70f-ba57cb901cac"
version = "0.10.7"
[[deps.MKL_jll]]
deps = ["Artifacts", "IntelOpenMP_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "oneTBB_jll"]
git-tree-sha1 = "80b2833b56d466b3858d565adcd16a4a05f2089b"
uuid = "856f044c-d86e-5d09-b602-aeab76dc8ba7"
version = "2024.1.0+0"
[[deps.MPICH_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Hwloc_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "4099bb6809ac109bfc17d521dad33763bcf026b7"
uuid = "7cb0a576-ebde-5e09-9194-50597f1243b4"
version = "4.2.1+1"
[[deps.MPIPreferences]]
deps = ["Libdl", "Preferences"]
git-tree-sha1 = "c105fe467859e7f6e9a852cb15cb4301126fac07"
uuid = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
version = "0.1.11"
[[deps.MPItrampoline_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "8c35d5420193841b2f367e658540e8d9e0601ed0"
uuid = "f1f71cc9-e9ae-5b93-9b94-4fe0e1ad3748"
version = "5.4.0+0"
[[deps.MacroTools]]
deps = ["Markdown", "Random"]
git-tree-sha1 = "2fa9ee3e63fd3a4f7a9a4f4744a52f4856de82df"
uuid = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
version = "0.5.13"
[[deps.Makie]]
deps = ["Animations", "Base64", "CRC32c", "ColorBrewer", "ColorSchemes", "ColorTypes", "Colors", "Contour", "Dates", "DelaunayTriangulation", "Distributions", "DocStringExtensions", "Downloads", "FFMPEG_jll", "FileIO", "FilePaths", "FixedPointNumbers", "Format", "FreeType", "FreeTypeAbstraction", "GeometryBasics", "GridLayoutBase", "ImageIO", "InteractiveUtils", "IntervalSets", "Isoband", "KernelDensity", "LaTeXStrings", "LinearAlgebra", "MacroTools", "MakieCore", "Markdown", "MathTeXEngine", "Observables", "OffsetArrays", "Packing", "PlotUtils", "PolygonOps", "PrecompileTools", "Printf", "REPL", "Random", "RelocatableFolders", "Scratch", "ShaderAbstractions", "Showoff", "SignedDistanceFields", "SparseArrays", "Statistics", "StatsBase", "StatsFuns", "StructArrays", "TriplotBase", "UnicodeFun", "Unitful"]
git-tree-sha1 = "ec3a60c9de787bc6ef119d13e07d4bfacceebb83"
uuid = "ee78f7c6-11fb-53f2-987a-cfe4a2b5a57a"
version = "0.21.2"
[[deps.MakieCore]]
deps = ["ColorTypes", "GeometryBasics", "IntervalSets", "Observables"]
git-tree-sha1 = "c1c9da1a69f6c635a60581c98da252958c844d70"
uuid = "20f20a25-4f0e-4fdf-b5d1-57303727442b"
version = "0.8.2"
[[deps.MappedArrays]]
git-tree-sha1 = "2dab0221fe2b0f2cb6754eaa743cc266339f527e"
uuid = "dbb5928d-eab1-5f90-85c2-b9b0edb7c900"
version = "0.4.2"
[[deps.Markdown]]
deps = ["Base64"]
uuid = "d6f4376e-aef5-505a-96c1-9c027394607a"
[[deps.MathTeXEngine]]
deps = ["AbstractTrees", "Automa", "DataStructures", "FreeTypeAbstraction", "GeometryBasics", "LaTeXStrings", "REPL", "RelocatableFolders", "UnicodeFun"]
git-tree-sha1 = "1865d0b8a2d91477c8b16b49152a32764c7b1f5f"
uuid = "0a4f8689-d25c-4efe-a92b-7142dfc1aa53"
version = "0.6.0"
[[deps.MbedTLS_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1"
version = "2.28.2+1"
[[deps.MicrosoftMPI_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "f12a29c4400ba812841c6ace3f4efbb6dbb3ba01"
uuid = "9237b28f-5490-5468-be7b-bb81f5f5e6cf"
version = "10.1.4+2"
[[deps.Missings]]
deps = ["DataAPI"]
git-tree-sha1 = "ec4f7fbeab05d7747bdf98eb74d130a2a2ed298d"
uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28"
version = "1.2.0"
[[deps.Mmap]]
uuid = "a63ad114-7e13-5084-954f-fe012c677804"
[[deps.MosaicViews]]
deps = ["MappedArrays", "OffsetArrays", "PaddedViews", "StackViews"]
git-tree-sha1 = "7b86a5d4d70a9f5cdf2dacb3cbe6d251d1a61dbe"
uuid = "e94cdb99-869f-56ef-bcf0-1ae2bcbe0389"
version = "0.3.4"
[[deps.MozillaCACerts_jll]]
uuid = "14a3606d-f60d-562e-9121-12d972cd8159"
version = "2023.1.10"
[[deps.NaNMath]]
deps = ["OpenLibm_jll"]
git-tree-sha1 = "0877504529a3e5c3343c6f8b4c0381e57e4387e4"
uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3"
version = "1.0.2"
[[deps.Netpbm]]
deps = ["FileIO", "ImageCore", "ImageMetadata"]
git-tree-sha1 = "d92b107dbb887293622df7697a2223f9f8176fcd"
uuid = "f09324ee-3d7c-5217-9330-fc30815ba969"
version = "1.1.1"
[[deps.NetworkOptions]]
uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908"
version = "1.2.0"
[[deps.Observables]]
git-tree-sha1 = "7438a59546cf62428fc9d1bc94729146d37a7225"
uuid = "510215fc-4207-5dde-b226-833fc4488ee2"
version = "0.5.5"
[[deps.OffsetArrays]]
git-tree-sha1 = "e64b4f5ea6b7389f6f046d13d4896a8f9c1ba71e"
uuid = "6fe1bfb0-de20-5000-8ca7-80f57d26f881"
version = "1.14.0"
weakdeps = ["Adapt"]
[deps.OffsetArrays.extensions]
OffsetArraysAdaptExt = "Adapt"
[[deps.Ogg_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "887579a3eb005446d514ab7aeac5d1d027658b8f"
uuid = "e7412a2a-1a6e-54c0-be00-318e2571c051"
version = "1.3.5+1"
[[deps.OpenBLAS_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Libdl"]
uuid = "4536629a-c528-5b80-bd46-f80d51c5b363"
version = "0.3.23+4"
[[deps.OpenEXR]]
deps = ["Colors", "FileIO", "OpenEXR_jll"]
git-tree-sha1 = "327f53360fdb54df7ecd01e96ef1983536d1e633"
uuid = "52e1d378-f018-4a11-a4be-720524705ac7"
version = "0.3.2"
[[deps.OpenEXR_jll]]
deps = ["Artifacts", "Imath_jll", "JLLWrappers", "Libdl", "Zlib_jll"]
git-tree-sha1 = "8292dd5c8a38257111ada2174000a33745b06d4e"
uuid = "18a262bb-aa17-5467-a713-aee519bc75cb"
version = "3.2.4+0"
[[deps.OpenLibm_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "05823500-19ac-5b8b-9628-191a04bc5112"
version = "0.8.1+2"
[[deps.OpenMPI_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Hwloc_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML", "Zlib_jll"]
git-tree-sha1 = "a9de2f1fc98b92f8856c640bf4aec1ac9b2a0d86"
uuid = "fe0851c0-eecd-5654-98d4-656369965a5c"
version = "5.0.3+0"
[[deps.OpenSSL_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "a028ee3cb5641cccc4c24e90c36b0a4f7707bdf5"
uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95"
version = "3.0.14+0"
[[deps.OpenSpecFun_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "13652491f6856acfd2db29360e1bbcd4565d04f1"
uuid = "efe28fd5-8261-553b-a9e1-b2916fc3738e"
version = "0.5.5+0"
[[deps.Opus_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "51a08fb14ec28da2ec7a927c4337e4332c2a4720"
uuid = "91d4177d-7536-5919-b921-800302f37372"
version = "1.3.2+0"
[[deps.OrderedCollections]]
git-tree-sha1 = "dfdf5519f235516220579f949664f1bf44e741c5"
uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
version = "1.6.3"
[[deps.PCRE2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "efcefdf7-47ab-520b-bdef-62a2eaa19f15"
version = "10.42.0+1"
[[deps.PDMats]]
deps = ["LinearAlgebra", "SparseArrays", "SuiteSparse"]
git-tree-sha1 = "949347156c25054de2db3b166c52ac4728cbad65"
uuid = "90014a1f-27ba-587c-ab20-58faa44d9150"
version = "0.11.31"
[[deps.PNGFiles]]
deps = ["Base64", "CEnum", "ImageCore", "IndirectArrays", "OffsetArrays", "libpng_jll"]
git-tree-sha1 = "67186a2bc9a90f9f85ff3cc8277868961fb57cbd"
uuid = "f57f5aa1-a3ce-4bc8-8ab9-96f992907883"
version = "0.4.3"
[[deps.Packing]]
deps = ["GeometryBasics"]
git-tree-sha1 = "ec3edfe723df33528e085e632414499f26650501"
uuid = "19eb6ba3-879d-56ad-ad62-d5c202156566"
version = "0.5.0"
[[deps.PaddedViews]]
deps = ["OffsetArrays"]
git-tree-sha1 = "0fac6313486baae819364c52b4f483450a9d793f"
uuid = "5432bcbf-9aad-5242-b902-cca2824c8663"
version = "0.5.12"
[[deps.Pango_jll]]
deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "FriBidi_jll", "Glib_jll", "HarfBuzz_jll", "JLLWrappers", "Libdl"]
git-tree-sha1 = "cb5a2ab6763464ae0f19c86c56c63d4a2b0f5bda"
uuid = "36c8627f-9965-5494-a995-c6b170f724f3"
version = "1.52.2+0"
[[deps.Parsers]]
deps = ["Dates", "PrecompileTools", "UUIDs"]
git-tree-sha1 = "8489905bcdbcfac64d1daa51ca07c0d8f0283821"
uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0"
version = "2.8.1"
[[deps.Pixman_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LLVMOpenMP_jll", "Libdl"]
git-tree-sha1 = "35621f10a7531bc8fa58f74610b1bfb70a3cfc6b"
uuid = "30392449-352a-5448-841d-b1acce4e97dc"
version = "0.43.4+0"
[[deps.Pkg]]
deps = ["Artifacts", "Dates", "Downloads", "FileWatching", "LibGit2", "Libdl", "Logging", "Markdown", "Printf", "REPL", "Random", "SHA", "Serialization", "TOML", "Tar", "UUIDs", "p7zip_jll"]
uuid = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
version = "1.10.0"
[[deps.PkgVersion]]
deps = ["Pkg"]
git-tree-sha1 = "f9501cc0430a26bc3d156ae1b5b0c1b47af4d6da"
uuid = "eebad327-c553-4316-9ea0-9fa01ccd7688"
version = "0.3.3"
[[deps.PlotUtils]]
deps = ["ColorSchemes", "Colors", "Dates", "PrecompileTools", "Printf", "Random", "Reexport", "Statistics"]
git-tree-sha1 = "7b1a9df27f072ac4c9c7cbe5efb198489258d1f5"
uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
version = "1.4.1"
[[deps.PolygonOps]]
git-tree-sha1 = "77b3d3605fc1cd0b42d95eba87dfcd2bf67d5ff6"
uuid = "647866c9-e3ac-4575-94e7-e3d426903924"
version = "0.1.2"
[[deps.PrecompileTools]]
deps = ["Preferences"]
git-tree-sha1 = "5aa36f7049a63a1528fe8f7c3f2113413ffd4e1f"
uuid = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
version = "1.2.1"
[[deps.Preferences]]
deps = ["TOML"]
git-tree-sha1 = "9306f6085165d270f7e3db02af26a400d580f5c6"
uuid = "21216c6a-2e73-6563-6e65-726566657250"
version = "1.4.3"
[[deps.Printf]]
deps = ["Unicode"]
uuid = "de0858da-6303-5e67-8744-51eddeeeb8d7"
[[deps.ProgressMeter]]
deps = ["Distributed", "Printf"]
git-tree-sha1 = "763a8ceb07833dd51bb9e3bbca372de32c0605ad"
uuid = "92933f4c-e287-5a05-a399-4b506db050ca"
version = "1.10.0"
[[deps.PtrArrays]]
git-tree-sha1 = "f011fbb92c4d401059b2212c05c0601b70f8b759"
uuid = "43287f4e-b6f4-7ad1-bb20-aadabca52c3d"
version = "1.2.0"
[[deps.QOI]]
deps = ["ColorTypes", "FileIO", "FixedPointNumbers"]
git-tree-sha1 = "18e8f4d1426e965c7b532ddd260599e1510d26ce"
uuid = "4b34888f-f399-49d4-9bb3-47ed5cae4e65"
version = "1.0.0"
[[deps.QuadGK]]
deps = ["DataStructures", "LinearAlgebra"]
git-tree-sha1 = "9b23c31e76e333e6fb4c1595ae6afa74966a729e"
uuid = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
version = "2.9.4"
[[deps.REPL]]
deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"]
uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
[[deps.Random]]
deps = ["SHA"]
uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
[[deps.RangeArrays]]
git-tree-sha1 = "b9039e93773ddcfc828f12aadf7115b4b4d225f5"
uuid = "b3c3ace0-ae52-54e7-9d0b-2c1406fd6b9d"
version = "0.3.2"
[[deps.Ratios]]
deps = ["Requires"]
git-tree-sha1 = "1342a47bf3260ee108163042310d26f2be5ec90b"
uuid = "c84ed2f1-dad5-54f0-aa8e-dbefe2724439"
version = "0.4.5"
weakdeps = ["FixedPointNumbers"]
[deps.Ratios.extensions]
RatiosFixedPointNumbersExt = "FixedPointNumbers"
[[deps.Reexport]]
git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b"
uuid = "189a3867-3050-52da-a836-e630ba90ab69"
version = "1.2.2"
[[deps.RelocatableFolders]]
deps = ["SHA", "Scratch"]
git-tree-sha1 = "ffdaf70d81cf6ff22c2b6e733c900c3321cab864"
uuid = "05181044-ff0b-4ac5-8273-598c1e38db00"
version = "1.0.1"
[[deps.Requires]]
deps = ["UUIDs"]
git-tree-sha1 = "838a3a4188e2ded87a4f9f184b4b0d78a1e91cb7"
uuid = "ae029012-a4dd-5104-9daa-d747884805df"
version = "1.3.0"
[[deps.Rmath]]
deps = ["Random", "Rmath_jll"]
git-tree-sha1 = "f65dcb5fa46aee0cf9ed6274ccbd597adc49aa7b"
uuid = "79098fc4-a85e-5d69-aa6a-4863f24498fa"
version = "0.7.1"
[[deps.Rmath_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "d483cd324ce5cf5d61b77930f0bbd6cb61927d21"
uuid = "f50d1b31-88e8-58de-be2c-1cc44531875f"
version = "0.4.2+0"
[[deps.RoundingEmulator]]
git-tree-sha1 = "40b9edad2e5287e05bd413a38f61a8ff55b9557b"
uuid = "5eaf0fd0-dfba-4ccb-bf02-d820a40db705"
version = "0.2.1"
[[deps.SHA]]
uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce"
version = "0.7.0"
[[deps.SIMD]]
deps = ["PrecompileTools"]
git-tree-sha1 = "2803cab51702db743f3fda07dd1745aadfbf43bd"
uuid = "fdea26ae-647d-5447-a871-4b548cad5224"
version = "3.5.0"
[[deps.Scratch]]
deps = ["Dates"]
git-tree-sha1 = "3bac05bc7e74a75fd9cba4295cde4045d9fe2386"
uuid = "6c6a2e73-6563-6170-7368-637461726353"
version = "1.2.1"
[[deps.Serialization]]
uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
[[deps.ShaderAbstractions]]
deps = ["ColorTypes", "FixedPointNumbers", "GeometryBasics", "LinearAlgebra", "Observables", "StaticArrays", "StructArrays", "Tables"]
git-tree-sha1 = "79123bc60c5507f035e6d1d9e563bb2971954ec8"
uuid = "65257c39-d410-5151-9873-9b3e5be5013e"
version = "0.4.1"
[[deps.SharedArrays]]
deps = ["Distributed", "Mmap", "Random", "Serialization"]
uuid = "1a1011a3-84de-559e-8e89-a11a2f7dc383"
[[deps.Showoff]]
deps = ["Dates", "Grisu"]
git-tree-sha1 = "91eddf657aca81df9ae6ceb20b959ae5653ad1de"
uuid = "992d4aef-0814-514b-bc4d-f2e9a6c4116f"
version = "1.0.3"
[[deps.SignedDistanceFields]]
deps = ["Random", "Statistics", "Test"]
git-tree-sha1 = "d263a08ec505853a5ff1c1ebde2070419e3f28e9"
uuid = "73760f76-fbc4-59ce-8f25-708e95d2df96"
version = "0.4.0"
[[deps.SimpleTraits]]
deps = ["InteractiveUtils", "MacroTools"]
git-tree-sha1 = "5d7e3f4e11935503d3ecaf7186eac40602e7d231"
uuid = "699a6c99-e7fa-54fc-8d76-47d257e15c1d"
version = "0.9.4"
[[deps.Sixel]]
deps = ["Dates", "FileIO", "ImageCore", "IndirectArrays", "OffsetArrays", "REPL", "libsixel_jll"]
git-tree-sha1 = "2da10356e31327c7096832eb9cd86307a50b1eb6"
uuid = "45858cf5-a6b0-47a3-bbea-62219f50df47"
version = "0.1.3"
[[deps.Sockets]]
uuid = "6462fe0b-24de-5631-8697-dd941f90decc"
[[deps.SortingAlgorithms]]
deps = ["DataStructures"]
git-tree-sha1 = "66e0a8e672a0bdfca2c3f5937efb8538b9ddc085"
uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c"
version = "1.2.1"
[[deps.SparseArrays]]
deps = ["Libdl", "LinearAlgebra", "Random", "Serialization", "SuiteSparse_jll"]
uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
version = "1.10.0"
[[deps.SpecialFunctions]]
deps = ["IrrationalConstants", "LogExpFunctions", "OpenLibm_jll", "OpenSpecFun_jll"]
git-tree-sha1 = "2f5d4697f21388cbe1ff299430dd169ef97d7e14"
uuid = "276daf66-3868-5448-9aa4-cd146d93841b"
version = "2.4.0"
weakdeps = ["ChainRulesCore"]
[deps.SpecialFunctions.extensions]
SpecialFunctionsChainRulesCoreExt = "ChainRulesCore"
[[deps.StackViews]]
deps = ["OffsetArrays"]
git-tree-sha1 = "46e589465204cd0c08b4bd97385e4fa79a0c770c"
uuid = "cae243ae-269e-4f55-b966-ac2d0dc13c15"
version = "0.1.1"
[[deps.StaticArrays]]
deps = ["LinearAlgebra", "PrecompileTools", "Random", "StaticArraysCore"]
git-tree-sha1 = "6e00379a24597be4ae1ee6b2d882e15392040132"
uuid = "90137ffa-7385-5640-81b9-e52037218182"
version = "1.9.5"
weakdeps = ["ChainRulesCore", "Statistics"]
[deps.StaticArrays.extensions]
StaticArraysChainRulesCoreExt = "ChainRulesCore"
StaticArraysStatisticsExt = "Statistics"
[[deps.StaticArraysCore]]
git-tree-sha1 = "192954ef1208c7019899fbf8049e717f92959682"
uuid = "1e83bf80-4336-4d27-bf5d-d5a4f845583c"
version = "1.4.3"
[[deps.Statistics]]
deps = ["LinearAlgebra", "SparseArrays"]
uuid = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
version = "1.10.0"
[[deps.StatsAPI]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "1ff449ad350c9c4cbc756624d6f8a8c3ef56d3ed"
uuid = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
version = "1.7.0"
[[deps.StatsBase]]
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "LogExpFunctions", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
git-tree-sha1 = "5cf7606d6cef84b543b483848d4ae08ad9832b21"
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
version = "0.34.3"
[[deps.StatsFuns]]
deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
git-tree-sha1 = "cef0472124fab0695b58ca35a77c6fb942fdab8a"
uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
version = "1.3.1"
[deps.StatsFuns.extensions]
StatsFunsChainRulesCoreExt = "ChainRulesCore"
StatsFunsInverseFunctionsExt = "InverseFunctions"
[deps.StatsFuns.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.StructArrays]]
deps = ["ConstructionBase", "DataAPI", "Tables"]
git-tree-sha1 = "f4dc295e983502292c4c3f951dbb4e985e35b3be"
uuid = "09ab397b-f2b6-538f-b94a-2f83cf4a842a"
version = "0.6.18"
[deps.StructArrays.extensions]
StructArraysAdaptExt = "Adapt"
StructArraysGPUArraysCoreExt = "GPUArraysCore"
StructArraysSparseArraysExt = "SparseArrays"
StructArraysStaticArraysExt = "StaticArrays"
[deps.StructArrays.weakdeps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
[[deps.SuiteSparse]]
deps = ["Libdl", "LinearAlgebra", "Serialization", "SparseArrays"]
uuid = "4607b0f0-06f3-5cda-b6b1-a6196a1729e9"
[[deps.SuiteSparse_jll]]
deps = ["Artifacts", "Libdl", "libblastrampoline_jll"]
uuid = "bea87d4a-7f5b-5778-9afe-8cc45184846c"
version = "7.2.1+1"
[[deps.TOML]]
deps = ["Dates"]
uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
version = "1.0.3"
[[deps.TableTraits]]
deps = ["IteratorInterfaceExtensions"]
git-tree-sha1 = "c06b2f539df1c6efa794486abfb6ed2022561a39"
uuid = "3783bdb8-4a98-5b6b-af9a-565f29a5fe9c"
version = "1.0.1"
[[deps.Tables]]
deps = ["DataAPI", "DataValueInterfaces", "IteratorInterfaceExtensions", "LinearAlgebra", "OrderedCollections", "TableTraits"]
git-tree-sha1 = "cb76cf677714c095e535e3501ac7954732aeea2d"
uuid = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
version = "1.11.1"
[[deps.Tar]]
deps = ["ArgTools", "SHA"]
uuid = "a4e569a6-e804-4fa4-b0f3-eef7a1d5b13e"
version = "1.10.0"
[[deps.TensorCore]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "1feb45f88d133a655e001435632f019a9a1bcdb6"
uuid = "62fd8b95-f654-4bbd-a8a5-9c27f68ccd50"
version = "0.1.1"
[[deps.Test]]
deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[deps.TiffImages]]
deps = ["ColorTypes", "DataStructures", "DocStringExtensions", "FileIO", "FixedPointNumbers", "IndirectArrays", "Inflate", "Mmap", "OffsetArrays", "PkgVersion", "ProgressMeter", "SIMD", "UUIDs"]
git-tree-sha1 = "bc7fd5c91041f44636b2c134041f7e5263ce58ae"
uuid = "731e570b-9d59-4bfa-96dc-6df516fadf69"
version = "0.10.0"
[[deps.TranscodingStreams]]
git-tree-sha1 = "a947ea21087caba0a798c5e494d0bb78e3a1a3a0"
uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
version = "0.10.9"
weakdeps = ["Random", "Test"]
[deps.TranscodingStreams.extensions]
TestExt = ["Test", "Random"]
[[deps.TriplotBase]]
git-tree-sha1 = "4d4ed7f294cda19382ff7de4c137d24d16adc89b"
uuid = "981d1d27-644d-49a2-9326-4793e63143c3"
version = "0.1.0"
[[deps.UUIDs]]
deps = ["Random", "SHA"]
uuid = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
[[deps.Unicode]]
uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5"
[[deps.UnicodeFun]]
deps = ["REPL"]
git-tree-sha1 = "53915e50200959667e78a92a418594b428dffddf"
uuid = "1cfade01-22cf-5700-b092-accc4b62d6e1"
version = "0.4.1"
[[deps.Unitful]]
deps = ["Dates", "LinearAlgebra", "Random"]
git-tree-sha1 = "dd260903fdabea27d9b6021689b3cd5401a57748"
uuid = "1986cc42-f94f-5a68-af5c-568840ba703d"
version = "1.20.0"
[deps.Unitful.extensions]
ConstructionBaseUnitfulExt = "ConstructionBase"
InverseFunctionsUnitfulExt = "InverseFunctions"
[deps.Unitful.weakdeps]
ConstructionBase = "187b0558-2788-49d3-abe0-74a17ed4e7c9"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.WoodburyMatrices]]
deps = ["LinearAlgebra", "SparseArrays"]
git-tree-sha1 = "c1a7aa6219628fcd757dede0ca95e245c5cd9511"
uuid = "efce3f68-66dc-5838-9240-27a6d6f5f9b6"
version = "1.0.0"
[[deps.XML2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Zlib_jll"]
git-tree-sha1 = "52ff2af32e591541550bd753c0da8b9bc92bb9d9"
uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
version = "2.12.7+0"
[[deps.XSLT_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
git-tree-sha1 = "91844873c4085240b95e795f692c4cec4d805f8a"
uuid = "aed1982a-8fda-507f-9586-7b0439959a61"
version = "1.1.34+0"
[[deps.Xorg_libX11_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libxcb_jll", "Xorg_xtrans_jll"]
git-tree-sha1 = "afead5aba5aa507ad5a3bf01f58f82c8d1403495"
uuid = "4f6342f7-b3d2-589e-9d20-edeb45f2b2bc"
version = "1.8.6+0"
[[deps.Xorg_libXau_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "6035850dcc70518ca32f012e46015b9beeda49d8"
uuid = "0c0b7dd1-d40b-584c-a123-a41640f87eec"
version = "1.0.11+0"
[[deps.Xorg_libXdmcp_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "34d526d318358a859d7de23da945578e8e8727b7"
uuid = "a3789734-cfe1-5b06-b2d0-1dd0d9d62d05"
version = "1.1.4+0"
[[deps.Xorg_libXext_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libX11_jll"]
git-tree-sha1 = "d2d1a5c49fae4ba39983f63de6afcbea47194e85"
uuid = "1082639a-0dae-5f34-9b06-72781eeb8cb3"
version = "1.3.6+0"
[[deps.Xorg_libXrender_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libX11_jll"]
git-tree-sha1 = "47e45cd78224c53109495b3e324df0c37bb61fbe"
uuid = "ea2f1a96-1ddc-540d-b46f-429655e07cfa"
version = "0.9.11+0"
[[deps.Xorg_libpthread_stubs_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "8fdda4c692503d44d04a0603d9ac0982054635f9"
uuid = "14d82f49-176c-5ed1-bb49-ad3f5cbd8c74"
version = "0.1.1+0"
[[deps.Xorg_libxcb_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "XSLT_jll", "Xorg_libXau_jll", "Xorg_libXdmcp_jll", "Xorg_libpthread_stubs_jll"]
git-tree-sha1 = "b4bfde5d5b652e22b9c790ad00af08b6d042b97d"
uuid = "c7cfdc94-dc32-55de-ac96-5a1b8d977c5b"
version = "1.15.0+0"
[[deps.Xorg_xtrans_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "e92a1a012a10506618f10b7047e478403a046c77"
uuid = "c5fb5394-a638-5e4d-96e5-b29de1b5cf10"
version = "1.5.0+0"
[[deps.Zlib_jll]]
deps = ["Libdl"]
uuid = "83775a58-1f1d-513f-b197-d71354ab007a"
version = "1.2.13+1"
[[deps.isoband_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "51b5eeb3f98367157a7a12a1fb0aa5328946c03c"
uuid = "9a68df92-36a6-505f-a73e-abb412b6bfb4"
version = "0.2.3+0"
[[deps.libaec_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "46bf7be2917b59b761247be3f317ddf75e50e997"
uuid = "477f73a3-ac25-53e9-8cc3-50b2fa2566f0"
version = "1.1.2+0"
[[deps.libaom_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "1827acba325fdcdf1d2647fc8d5301dd9ba43a9d"
uuid = "a4ae2306-e953-59d6-aa16-d00cac43593b"
version = "3.9.0+0"
[[deps.libass_jll]]
deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "HarfBuzz_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"]
git-tree-sha1 = "5982a94fcba20f02f42ace44b9894ee2b140fe47"
uuid = "0ac62f75-1d6f-5e53-bd7c-93b484bb37c0"
version = "0.15.1+0"
[[deps.libblastrampoline_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850b90-86db-534c-a0d3-1478176c7d93"
version = "5.8.0+1"
[[deps.libfdk_aac_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "daacc84a041563f965be61859a36e17c4e4fcd55"
uuid = "f638f0a6-7fb0-5443-88ba-1cc74229b280"
version = "2.0.2+0"
[[deps.libpng_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Zlib_jll"]
git-tree-sha1 = "d7015d2e18a5fd9a4f47de711837e980519781a4"
uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
version = "1.6.43+1"
[[deps.libsixel_jll]]
deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Pkg", "libpng_jll"]
git-tree-sha1 = "d4f63314c8aa1e48cd22aa0c17ed76cd1ae48c3c"
uuid = "075b6546-f08a-558a-be8f-8157d0f608a5"
version = "1.10.3+0"
[[deps.libvorbis_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Ogg_jll", "Pkg"]
git-tree-sha1 = "b910cb81ef3fe6e78bf6acee440bda86fd6ae00c"
uuid = "f27f6e37-5d2b-51aa-960f-b287f2bc3b7a"
version = "1.3.7+1"
[[deps.nghttp2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850ede-7688-5339-a07c-302acd2aaf8d"
version = "1.52.0+1"
[[deps.oneTBB_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "7d0ea0f4895ef2f5cb83645fa689e52cb55cf493"
uuid = "1317d2d5-d96f-522e-a858-c73665f53c3e"
version = "2021.12.0+0"
[[deps.p7zip_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
version = "17.4.0+2"
[[deps.x264_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "4fea590b89e6ec504593146bf8b988b2c00922b2"
uuid = "1270edf5-f2f9-52d2-97e9-ab00b5d0237a"
version = "2021.5.5+0"
[[deps.x265_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "ee567a171cce03570d77ad3a43e90218e38937a9"
uuid = "dfaa095f-4041-5dcd-9319-2fabd8486b76"
version = "3.5.0+0"
"""
# ╔═╡ Cell order:
# ╟─f9d70766-96c9-4d06-bc78-a2b1761cb9f6
# ╟─5c7c55d4-36b5-4703-9018-c334c9465d50
# ╠═fdfb1464-908f-437b-aaa4-5a9e32dd2feb
# ╟─9210e64a-6939-4ed5-b6d1-41685535b2c8
# ╠═a1db5ffd-620a-4a70-b859-e90c9d6aa5fb
# ╠═066b6f99-5135-49d2-a7de-cfb22ed72a3d
# ╠═2c945712-cbee-4a34-9a3b-ba602aa5fac0
# ╟─224e7904-1004-4d9a-aec9-4dae9e797e3a
# ╠═712c91c0-0e41-4661-99ca-a17e50ce4f80
# ╟─7e86d631-195c-4647-a99a-7fb2da4e79ff
# ╠═62c3f1b5-5cf7-4b69-8db4-9f380067667b
# ╟─c2de9482-7a09-4540-83d3-8d434200683a
# ╟─038c9c6a-a369-4504-8783-2a4c56c051ae
# ╟─52d5d6d5-f331-4d4b-a150-577706b3f87a
# ╟─1f95ecf2-f166-4e24-b124-d950cf4942d9
# ╠═14a0cf26-0003-45fe-b03c-8ad0140d26b2
# ╠═92ac6f39-7946-49dd-bc8c-b7f7ee430d66
# ╟─9cc4dfb7-ceb7-4d0f-99f6-dc48825c93e1
# ╟─f1266f66-0baf-45fa-aa20-a6279bff5cd8
# ╠═2017d76d-1a5e-447d-b569-9edbb5c2cd13
# ╟─401afd52-247f-4159-88c9-91b9ff75e925
# ╠═09b91268-7365-4cae-88ce-21ab78e0ce8c
# ╟─00ce15cd-404a-458e-b557-e1b5c55c41c2
# ╟─00000000-0000-0000-0000-000000000001
# ╟─00000000-0000-0000-0000-000000000002
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 452 | module LossFunctionsExt
using GCPDecompositions, LossFunctions
using IntervalSets
const SupportedLosses = Union{LossFunctions.DistanceLoss,LossFunctions.MarginLoss}
GCPLosses.value(loss::SupportedLosses, x, m) = loss(m, x)
GCPLosses.deriv(loss::SupportedLosses, x, m) = LossFunctions.deriv(loss, m, x)
GCPLosses.domain(::LossFunctions.DistanceLoss) = Interval(-Inf, Inf)
GCPLosses.domain(::LossFunctions.MarginLoss) = Interval(-Inf, Inf)
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 3586 | """
Generalized CP Decomposition module. Provides approximate CP tensor decomposition with respect to general losses.
"""
module GCPDecompositions
# Imports
import Base: ndims, size, show, summary
import Base: getindex
import LinearAlgebra: norm
using IntervalSets: Interval
using Random: default_rng
# Exports
export CPD
export ncomps
export gcp
export GCPLosses, GCPConstraints, GCPAlgorithms
include("tensor-kernels.jl")
include("cpd.jl")
include("gcp-losses.jl")
include("gcp-constraints.jl")
include("gcp-algorithms.jl")
if !isdefined(Base, :get_extension)
include("../ext/LossFunctionsExt.jl")
end
# Main fitting function
"""
gcp(X::Array, r;
loss = GCPLosses.LeastSquares(),
constraints = default_constraints(loss),
algorithm = default_algorithm(X, r, loss, constraints),
init = default_init(X, r, loss, constraints, algorithm))
Compute an approximate rank-`r` CP decomposition of the tensor `X`
with respect to the loss function `loss` and return a `CPD` object.
Keyword arguments:
+ `loss` : loss function (default: `GCPLosses.LeastSquares()`)
+ `constraints` : a `Tuple` of constraints on the factor matrices `U = (U[1],...,U[N])`.
+ `algorithm` : algorithm to use
+ `init` : initial decomposition (a `CPD`) to start the algorithm from
Conventional CP corresponds to the default `GCPLosses.LeastSquares()` loss
with the default of no constraints (i.e., `constraints = ()`).
If the LossFunctions.jl package is also loaded,
`loss` can also be a loss function from that package.
Check `GCPDecompositions.LossFunctionsExt.SupportedLosses`
to see what losses are supported.
See also: `CPD`, `GCPLosses`, `GCPConstraints`, `GCPAlgorithms`.
"""
gcp(
X::Array,
r;
loss = GCPLosses.LeastSquares(),
constraints = default_constraints(loss),
algorithm = default_algorithm(X, r, loss, constraints),
init = default_init(X, r, loss, constraints, algorithm),
) = GCPAlgorithms._gcp(X, r, loss, constraints, algorithm, init)
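# Example (hypothetical usage sketch; the tensor `X` and the rank `5` below are made-up inputs):
#
#     X = rand(10, 20, 30)
#     M = gcp(X, 5)                               # conventional CP (least-squares loss)
#     Mp = gcp(X, 5; loss = GCPLosses.Poisson())  # Poisson loss; a lower bound of 0 is added by default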
# Defaults
"""
default_constraints(loss)
Return a default tuple of constraints for the loss function `loss`.
See also: `gcp`.
"""
function default_constraints(loss)
dom = GCPLosses.domain(loss)
if dom == Interval(-Inf, +Inf)
return ()
elseif dom == Interval(0.0, +Inf)
return (GCPConstraints.LowerBound(0.0),)
else
error(
"only loss functions with a domain of `-Inf .. Inf` or `0 .. Inf` are (currently) supported",
)
end
end
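# For instance (sketch): `default_constraints(GCPLosses.LeastSquares())` returns `()`,
# while `default_constraints(GCPLosses.Poisson())` returns `(GCPConstraints.LowerBound(0.0),)`.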
"""
default_algorithm(X, r, loss, constraints)
Return a default algorithm for the data tensor `X`, rank `r`,
loss function `loss`, and tuple of constraints `constraints`.
See also: `gcp`.
"""
default_algorithm(X::Array{<:Real}, r, loss::GCPLosses.LeastSquares, constraints::Tuple{}) =
GCPAlgorithms.FastALS()
default_algorithm(X, r, loss, constraints) = GCPAlgorithms.LBFGSB()
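# For instance (sketch): a real-valued tensor with the default least-squares loss and no
# constraints dispatches to `FastALS()`; any other loss/constraint combination falls back
# to the general-purpose `LBFGSB()` solver.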
"""
default_init([rng=default_rng()], X, r, loss, constraints, algorithm)
Return a default initialization for the data tensor `X`, rank `r`,
loss function `loss`, tuple of constraints `constraints`, and
algorithm `algorithm`, using the random number generator `rng` if needed.
See also: `gcp`.
"""
default_init(X, r, loss, constraints, algorithm) =
default_init(default_rng(), X, r, loss, constraints, algorithm)
function default_init(rng, X, r, loss, constraints, algorithm)
# Generate CPD with random factors
T, N = nonmissingtype(eltype(X)), ndims(X)
T = promote_type(T, Float64)
M = CPD(ones(T, r), rand.(rng, T, size(X), r))
# Normalize
Mnorm = norm(M)
Xnorm = sqrt(sum(abs2, skipmissing(X)))
for k in Base.OneTo(N)
M.U[k] .*= (Xnorm / Mnorm)^(1 / N)
end
return M
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 2922 | ## CP decomposition type
"""
CPD
Tensor decomposition type for the canonical polyadic decompositions (CPD)
of a tensor (i.e., a multi-dimensional array) `A`.
This is the return type of `gcp(_)`,
the corresponding tensor decomposition function.
If `M::CPD` is the decomposition object,
the weights `λ` and the factor matrices `U = (U[1],...,U[N])`
can be obtained via `M.λ` and `M.U`,
such that `A = Σ_j λ[j] U[1][:,j] ∘ ⋯ ∘ U[N][:,j]`.
"""
struct CPD{T,N,Tλ<:AbstractVector{T},TU<:AbstractMatrix{T}}
λ::Tλ
U::NTuple{N,TU}
function CPD{T,N,Tλ,TU}(λ, U) where {T,N,Tλ<:AbstractVector{T},TU<:AbstractMatrix{T}}
Base.require_one_based_indexing(λ, U...)
for k in Base.OneTo(N)
size(U[k], 2) == length(λ) || throw(
DimensionMismatch(
"U[$k] has dimensions $(size(U[k])) but λ has length $(length(λ))",
),
)
end
return new{T,N,Tλ,TU}(λ, U)
end
end
CPD(λ::Tλ, U::NTuple{N,TU}) where {T,N,Tλ<:AbstractVector{T},TU<:AbstractMatrix{T}} =
CPD{T,N,Tλ,TU}(λ, U)
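# Example (hypothetical sketch; the weights and factor matrices below are made-up values):
#
#     λ = [1.0, 2.0]
#     U = (rand(3, 2), rand(4, 2), rand(5, 2))
#     M = CPD(λ, U)      # 3×4×5 rank-2 model
#     M[1, 2, 3]         # entry (1,2,3) of the modeled tensor, computed on the fly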
"""
ncomps(M::CPD)
Return the number of components in `M`.
See also: `ndims`, `size`.
"""
ncomps(M::CPD) = length(M.λ)
ndims(::CPD{T,N}) where {T,N} = N
size(M::CPD{T,N}, dim::Integer) where {T,N} = dim <= N ? size(M.U[dim], 1) : 1
size(M::CPD{T,N}) where {T,N} = ntuple(d -> size(M, d), N)
function show(io::IO, mime::MIME{Symbol("text/plain")}, M::CPD{T,N}) where {T,N}
# Compute displaysize for showing fields
LINES, COLUMNS = displaysize(io)
LINES_FIELD = max(LINES - 2 - N, 0) ÷ (1 + N)
io_field = IOContext(io, :displaysize => (LINES_FIELD, COLUMNS))
# Show summary and fields
summary(io, M)
println(io)
println(io, "λ weights:")
show(io_field, mime, M.λ)
for k in Base.OneTo(N)
println(io, "\nU[$k] factor matrix:")
show(io_field, mime, M.U[k])
end
end
function summary(io::IO, M::CPD)
dimstring =
ndims(M) == 0 ? "0-dimensional" :
ndims(M) == 1 ? "$(size(M,1))-element" : join(map(string, size(M)), '×')
_ncomps = ncomps(M)
return print(
io,
dimstring,
" ",
typeof(M),
" with ",
_ncomps,
_ncomps == 1 ? " component" : " components",
)
end
function getindex(M::CPD{T,N}, I::Vararg{Int,N}) where {T,N}
@boundscheck Base.checkbounds_indices(Bool, axes(M), I) || Base.throw_boundserror(M, I)
    val = zero(T)
for j in Base.OneTo(ncomps(M))
val += M.λ[j] * prod(M.U[k][I[k], j] for k in Base.OneTo(ndims(M)))
end
return val
end
getindex(M::CPD{T,N}, I::CartesianIndex{N}) where {T,N} = getindex(M, Tuple(I)...)
norm(M::CPD, p::Real = 2) =
p == 2 ? norm2(M) : norm((M[I] for I in CartesianIndices(size(M))), p)
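# Note: the 2-norm is computed via the Gram-matrix identity
# ‖M‖² = λ' * (U[1]'U[1] .* ⋯ .* U[N]'U[N]) * λ, avoiding materialization of the full tensor.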
function norm2(M::CPD{T,N}) where {T,N}
V = reduce(.*, M.U[i]'M.U[i] for i in 1:N)
return sqrt(abs(M.λ' * V * M.λ))
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 980 | ## Algorithm types
"""
Algorithms for Generalized CP Decomposition.
"""
module GCPAlgorithms
using ..GCPDecompositions
using ..TensorKernels: create_mttkrp_buffer, mttkrp!
using ..TensorKernels: khatrirao!, khatrirao
using IntervalSets: Interval
using LinearAlgebra: lu!, mul!, norm, rdiv!
using LBFGSB: lbfgsb
"""
AbstractAlgorithm
Abstract type for GCP algorithms.
Concrete types `ConcreteAlgorithm <: AbstractAlgorithm` should implement
`_gcp(X, r, loss, constraints, algorithm::ConcreteAlgorithm)`
that returns a `CPD`.
"""
abstract type AbstractAlgorithm end
"""
_gcp(X, r, loss, constraints, algorithm)
Internal function to compute an approximate rank-`r` CP decomposition
of the tensor `X` with respect to the loss function `loss` and the
constraints `constraints` using the algorithm `algorithm`, returning
a `CPD` object.
"""
function _gcp end
include("gcp-algorithms/lbfgsb.jl")
include("gcp-algorithms/als.jl")
include("gcp-algorithms/fastals.jl")
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 497 | ## Constraint types
"""
Constraints for Generalized CP Decomposition.
"""
module GCPConstraints
# Abstract type
"""
AbstractConstraint
Abstract type for GCP constraints on the factor matrices `U = (U[1],...,U[N])`.
"""
abstract type AbstractConstraint end
# Concrete types
"""
LowerBound(value::Real)
Lower-bound constraint on the entries of the factor matrices
`U = (U[1],...,U[N])`, i.e., `U[i][j,k] >= value`.
"""
struct LowerBound{T} <: AbstractConstraint
value::T
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 12974 | ## Loss function types
"""
Loss functions for Generalized CP Decomposition.
"""
module GCPLosses
using ..GCPDecompositions
using ..TensorKernels: mttkrps!
using IntervalSets: Interval
using LinearAlgebra: mul!, rmul!, Diagonal
import ForwardDiff
# Abstract type
"""
AbstractLoss
Abstract type for GCP loss functions ``f(x,m)``,
where ``x`` is the data entry and ``m`` is the model entry.
Concrete types `ConcreteLoss <: AbstractLoss` should implement:
- `value(loss::ConcreteLoss, x, m)` that computes the value of the loss function ``f(x,m)``
- `deriv(loss::ConcreteLoss, x, m)` that computes the value of the partial derivative ``\\partial_m f(x,m)`` with respect to ``m``
- `domain(loss::ConcreteLoss)` that returns an `Interval` from IntervalSets.jl defining the domain for ``m``
"""
abstract type AbstractLoss end
"""
value(loss, x, m)
Compute the value of the (entrywise) loss function `loss`
for data entry `x` and model entry `m`.
"""
function value end
"""
deriv(loss, x, m)
Compute the derivative of the (entrywise) loss function `loss`
at the model entry `m` for the data entry `x`.
"""
function deriv end
"""
domain(loss)
Return the domain of the (entrywise) loss function `loss`.
"""
function domain end
# Objective function and gradients
"""
objective(M::CPD, X::AbstractArray, loss)
Compute the GCP objective function for the model tensor `M`, data tensor `X`,
and loss function `loss`.
"""
function objective(M::CPD{T,N}, X::Array{TX,N}, loss) where {T,TX,N}
return sum(value(loss, X[I], M[I]) for I in CartesianIndices(X) if !ismissing(X[I]))
end
"""
grad_U!(GU, M::CPD, X::AbstractArray, loss)
Compute the GCP gradient with respect to the factor matrices `U = (U[1],...,U[N])`
for the model tensor `M`, data tensor `X`, and loss function `loss`, and store
the result in `GU = (GU[1],...,GU[N])`.
"""
function grad_U!(
GU::NTuple{N,TGU},
M::CPD{T,N},
X::Array{TX,N},
loss,
) where {T,TX,N,TGU<:AbstractMatrix{T}}
Y = [
ismissing(X[I]) ? zero(nonmissingtype(eltype(X))) : deriv(loss, X[I], M[I]) for
I in CartesianIndices(X)
]
mttkrps!(GU, Y, M.U)
for k in 1:N
rmul!(GU[k], Diagonal(M.λ))
end
return GU
end
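# Note: each GU[k] above is the mode-k MTTKRP of the elementwise loss derivatives `Y`
# with the factor matrices, scaled by the component weights λ; missing entries of `X`
# contribute zero to the gradient.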
# Statistically motivated losses
"""
LeastSquares()
Loss corresponding to conventional CP decomposition.
Corresponds to a statistical assumption of Gaussian data `X`
with mean given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\mathcal{N}(\\mu_i, \\sigma)``
- **Link function:** ``m_i = \\mu_i``
- **Loss function:** ``f(x,m) = (x-m)^2``
- **Domain:** ``m \\in \\mathbb{R}``
"""
struct LeastSquares <: AbstractLoss end
value(::LeastSquares, x, m) = (x - m)^2
deriv(::LeastSquares, x, m) = 2 * (m - x)
domain(::LeastSquares) = Interval(-Inf, +Inf)
"""
NonnegativeLeastSquares()
Loss corresponding to nonnegative CP decomposition.
Corresponds to a statistical assumption of Gaussian data `X`
with nonnegative mean given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\mathcal{N}(\\mu_i, \\sigma)``
- **Link function:** ``m_i = \\mu_i``
- **Loss function:** ``f(x,m) = (x-m)^2``
- **Domain:** ``m \\in [0, \\infty)``
"""
struct NonnegativeLeastSquares <: AbstractLoss end
value(::NonnegativeLeastSquares, x, m) = (x - m)^2
deriv(::NonnegativeLeastSquares, x, m) = 2 * (m - x)
domain(::NonnegativeLeastSquares) = Interval(0.0, Inf)
"""
Poisson(eps::Real = 1e-10)
Loss corresponding to a statistical assumption of Poisson data `X`
with rate given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\operatorname{Poisson}(\\lambda_i)``
- **Link function:** ``m_i = \\lambda_i``
- **Loss function:** ``f(x,m) = m - x \\log(m + \\epsilon)``
- **Domain:** ``m \\in [0, \\infty)``
"""
struct Poisson{T<:Real} <: AbstractLoss
eps::T
Poisson{T}(eps::T) where {T<:Real} =
eps >= zero(eps) ? new(eps) :
throw(DomainError(eps, "Poisson loss requires nonnegative `eps`"))
end
Poisson(eps::T = 1e-10) where {T<:Real} = Poisson{T}(eps)
value(loss::Poisson, x, m) = m - x * log(m + loss.eps)
deriv(loss::Poisson, x, m) = one(m) - x / (m + loss.eps)
domain(::Poisson) = Interval(0.0, +Inf)
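# Example (hypothetical sketch; `X` is assumed to be a tensor of counts):
#
#     M = gcp(X, 3; loss = GCPLosses.Poisson())   # nonnegative factors enforced by default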
"""
PoissonLog()
Loss corresponding to a statistical assumption of Poisson data `X`
with log-rate given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\operatorname{Poisson}(\\lambda_i)``
- **Link function:** ``m_i = \\log \\lambda_i``
- **Loss function:** ``f(x,m) = e^m - x m``
- **Domain:** ``m \\in \\mathbb{R}``
"""
struct PoissonLog <: AbstractLoss end
value(::PoissonLog, x, m) = exp(m) - x * m
deriv(::PoissonLog, x, m) = exp(m) - x
domain(::PoissonLog) = Interval(-Inf, +Inf)
"""
Gamma(eps::Real = 1e-10)
Loss corresponding to a statistical assumption of Gamma-distributed data `X`
with scale given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\operatorname{Gamma}(k, \\sigma_i)``
- **Link function:** ``m_i = k \\sigma_i``
- **Loss function:** ``f(x,m) = \\frac{x}{m + \\epsilon} + \\log(m + \\epsilon)``
- **Domain:** ``m \\in [0, \\infty)``
"""
struct Gamma{T<:Real} <: AbstractLoss
eps::T
Gamma{T}(eps::T) where {T<:Real} =
eps >= zero(eps) ? new(eps) :
throw(DomainError(eps, "Gamma loss requires nonnegative `eps`"))
end
Gamma(eps::T = 1e-10) where {T<:Real} = Gamma{T}(eps)
value(loss::Gamma, x, m) = x / (m + loss.eps) + log(m + loss.eps)
deriv(loss::Gamma, x, m) = -x / (m + loss.eps)^2 + inv(m + loss.eps)
domain(::Gamma) = Interval(0.0, +Inf)
"""
Rayleigh(eps::Real = 1e-10)
Loss corresponding to the statistical assumption of Rayleigh data `X`
with scale given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\operatorname{Rayleigh}(\\theta_i)``
- **Link function:** ``m_i = \\sqrt{\\frac{\\pi}{2}} \\theta_i``
- **Loss function:** ``f(x, m) = 2\\log(m + \\epsilon) + \\frac{\\pi}{4}(\\frac{x}{m + \\epsilon})^2``
- **Domain:** ``m \\in [0, \\infty)``
"""
struct Rayleigh{T<:Real} <: AbstractLoss
eps::T
Rayleigh{T}(eps::T) where {T<:Real} =
eps >= zero(eps) ? new(eps) :
throw(DomainError(eps, "Rayleigh loss requires nonnegative `eps`"))
end
Rayleigh(eps::T = 1e-10) where {T<:Real} = Rayleigh{T}(eps)
value(loss::Rayleigh, x, m) = 2 * log(m + loss.eps) + (pi / 4) * ((x / (m + loss.eps))^2)
deriv(loss::Rayleigh, x, m) = 2 / (m + loss.eps) - (pi / 2) * (x^2 / (m + loss.eps)^3)
domain(::Rayleigh) = Interval(0.0, +Inf)
"""
BernoulliOdds(eps::Real = 1e-10)
Loss corresponding to the statistical assumption of Bernoulli data `X`
with the odds of success given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\operatorname{Bernoulli}(\\rho_i)``
- **Link function:** ``m_i = \\frac{\\rho_i}{1 - \\rho_i}``
- **Loss function:** ``f(x, m) = \\log(m + 1) - x\\log(m + \\epsilon)``
- **Domain:** ``m \\in [0, \\infty)``
"""
struct BernoulliOdds{T<:Real} <: AbstractLoss
eps::T
BernoulliOdds{T}(eps::T) where {T<:Real} =
eps >= zero(eps) ? new(eps) :
throw(DomainError(eps, "BernoulliOdds requires nonnegative `eps`"))
end
BernoulliOdds(eps::T = 1e-10) where {T<:Real} = BernoulliOdds{T}(eps)
value(loss::BernoulliOdds, x, m) = log(m + 1) - x * log(m + loss.eps)
deriv(loss::BernoulliOdds, x, m) = 1 / (m + 1) - (x / (m + loss.eps))
domain(::BernoulliOdds) = Interval(0.0, +Inf)
"""
BernoulliLogit(eps::Real = 1e-10)
Loss corresponding to the statistical assumption of Bernoulli data `X`
with the log odds of success given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\operatorname{Bernoulli}(\\rho_i)``
- **Link function:** ``m_i = \\log(\\frac{\\rho_i}{1 - \\rho_i})``
- **Loss function:** ``f(x, m) = \\log(1 + e^m) - xm``
- **Domain:** ``m \\in \\mathbb{R}``
"""
struct BernoulliLogit{T<:Real} <: AbstractLoss
eps::T
BernoulliLogit{T}(eps::T) where {T<:Real} =
eps >= zero(eps) ? new(eps) :
        throw(DomainError(eps, "BernoulliLogit requires nonnegative `eps`"))
end
BernoulliLogit(eps::T = 1e-10) where {T<:Real} = BernoulliLogit{T}(eps)
value(::BernoulliLogit, x, m) = log(1 + exp(m)) - x * m
deriv(::BernoulliLogit, x, m) = exp(m) / (1 + exp(m)) - x
domain(::BernoulliLogit) = Interval(-Inf, +Inf)
"""
NegativeBinomialOdds(r::Integer, eps::Real = 1e-10)
Loss corresponding to the statistical assumption of Negative Binomial
data `X` with the odds of failure given by the low-rank model tensor `M`.
- **Distribution:** ``x_i \\sim \\operatorname{NegativeBinomial}(r, \\rho_i)``
- **Link function:** ``m_i = \\frac{\\rho_i}{1 - \\rho_i}``
- **Loss function:** ``f(x, m) = (r + x) \\log(1 + m) - x\\log(m + \\epsilon) ``
- **Domain:** ``m \\in [0, \\infty)``
"""
struct NegativeBinomialOdds{S<:Integer,T<:Real} <: AbstractLoss
r::S
eps::T
function NegativeBinomialOdds{S,T}(r::S, eps::T) where {S<:Integer,T<:Real}
eps >= zero(eps) ||
throw(DomainError(eps, "NegativeBinomialOdds requires nonnegative `eps`"))
r >= zero(r) ||
throw(DomainError(r, "NegativeBinomialOdds requires nonnegative `r`"))
return new(r, eps)
end
end
NegativeBinomialOdds(r::S, eps::T = 1e-10) where {S<:Integer,T<:Real} =
NegativeBinomialOdds{S,T}(r, eps)
value(loss::NegativeBinomialOdds, x, m) = (loss.r + x) * log(1 + m) - x * log(m + loss.eps)
deriv(loss::NegativeBinomialOdds, x, m) = (loss.r + x) / (1 + m) - x / (m + loss.eps)
domain(::NegativeBinomialOdds) = Interval(0.0, +Inf)
"""
Huber(Δ::Real)
Huber loss with transition point `Δ`.
- **Loss function:** ``f(x, m) = (x - m)^2`` if ``|x - m| \\leq \\Delta``, and ``2\\Delta|x - m| - \\Delta^2`` otherwise
- **Domain:** ``m \\in \\mathbb{R}``
"""
struct Huber{T<:Real} <: AbstractLoss
Δ::T
Huber{T}(Δ::T) where {T<:Real} =
Δ >= zero(Δ) ? new(Δ) : throw(DomainError(Δ, "Huber requires nonnegative `Δ`"))
end
Huber(Δ::T) where {T<:Real} = Huber{T}(Δ)
value(loss::Huber, x, m) =
abs(x - m) <= loss.Δ ? (x - m)^2 : 2 * loss.Δ * abs(x - m) - loss.Δ^2
deriv(loss::Huber, x, m) =
    abs(x - m) <= loss.Δ ? -2 * (x - m) : -2 * sign(x - m) * loss.Δ  # derivative of 2Δ|x-m| - Δ² w.r.t. m
domain(::Huber) = Interval(-Inf, +Inf)
"""
BetaDivergence(β::Real, eps::Real)
β-divergence loss for a given `β`.
- **Loss function:**
  ``f(x, m; \\beta) = \\frac{1}{\\beta} m^{\\beta} - \\frac{1}{\\beta - 1} x m^{\\beta - 1}`` if ``\\beta \\in \\mathbb{R} \\setminus \\{0, 1\\}``,
  ``m - x \\log(m)`` if ``\\beta = 1``, and
  ``\\frac{x}{m} + \\log(m)`` if ``\\beta = 0``
- **Domain:** ``m \\in [0, \\infty)``
"""
struct BetaDivergence{S<:Real,T<:Real} <: AbstractLoss
β::T
eps::T
BetaDivergence{S,T}(β::S, eps::T) where {S<:Real,T<:Real} =
eps >= zero(eps) ? new(β, eps) :
throw(DomainError(eps, "BetaDivergence requires nonnegative `eps`"))
end
BetaDivergence(β::S, eps::T = 1e-10) where {S<:Real,T<:Real} = BetaDivergence{S,T}(β, eps)
function value(loss::BetaDivergence, x, m)
if loss.β == 0
return x / (m + loss.eps) + log(m + loss.eps)
elseif loss.β == 1
return m - x * log(m + loss.eps)
else
return 1 / loss.β * m^loss.β - 1 / (loss.β - 1) * x * m^(loss.β - 1)
end
end
function deriv(loss::BetaDivergence, x, m)
if loss.β == 0
return -x / (m + loss.eps)^2 + 1 / (m + loss.eps)
elseif loss.β == 1
return 1 - x / (m + loss.eps)
else
return m^(loss.β - 1) - x * m^(loss.β - 2)
end
end
domain(::BetaDivergence) = Interval(0.0, +Inf)
# User-defined loss
"""
UserDefined
Type for user-defined loss functions ``f(x,m)``,
where ``x`` is the data entry and ``m`` is the model entry.
Contains three fields:
1. `func::Function` : function that evaluates the loss function ``f(x,m)``
2. `deriv::Function` : function that evaluates the partial derivative ``\\partial_m f(x,m)`` with respect to ``m``
3. `domain::Interval` : `Interval` from IntervalSets.jl defining the domain for ``m``
The constructor is `UserDefined(func; deriv, domain)`.
If not provided,
- `deriv` is automatically computed from `func` using forward-mode automatic differentiation
- `domain` gets a default value of `Interval(-Inf, +Inf)`
"""
struct UserDefined <: AbstractLoss
func::Function
deriv::Function
domain::Interval
function UserDefined(
func::Function;
deriv::Function = (x, m) -> ForwardDiff.derivative(m -> func(x, m), m),
domain::Interval = Interval(-Inf, Inf),
)
hasmethod(func, Tuple{Real,Real}) ||
error("`func` must accept two inputs `(x::Real, m::Real)`")
hasmethod(deriv, Tuple{Real,Real}) ||
error("`deriv` must accept two inputs `(x::Real, m::Real)`")
return new(func, deriv, domain)
end
end
value(loss::UserDefined, x, m) = loss.func(x, m)
deriv(loss::UserDefined, x, m) = loss.deriv(x, m)
domain(loss::UserDefined) = loss.domain
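# Example (hypothetical sketch): a custom quartic loss whose derivative is obtained
# automatically via ForwardDiff; `X` and `r` are assumed user inputs.
#
#     myloss = UserDefined((x, m) -> (x - m)^4)   # domain defaults to -Inf .. Inf
#     # M = gcp(X, r; loss = myloss)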
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 352 | ## Tensor Kernels
"""
Tensor kernels for Generalized CP Decomposition.
"""
module TensorKernels
using Compat: allequal
using LinearAlgebra: mul!
export create_mttkrp_buffer, mttkrp, mttkrp!, mttkrps, mttkrps!, khatrirao, khatrirao!
include("tensor-kernels/khatrirao.jl")
include("tensor-kernels/mttkrp.jl")
include("tensor-kernels/mttkrps.jl")
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1001 | ## Algorithm: ALS
"""
ALS
**A**lternating **L**east **S**quares.
Workhorse algorithm for `LeastSquares` loss with no constraints.
Algorithm parameters:
- `maxiters::Int` : max number of iterations (default: `200`)
"""
Base.@kwdef struct ALS <: AbstractAlgorithm
maxiters::Int = 200
end
function _gcp(
X::Array{TX,N},
r,
loss::GCPLosses.LeastSquares,
constraints::Tuple{},
algorithm::GCPAlgorithms.ALS,
init,
) where {TX<:Real,N}
# Initialization
M = deepcopy(init)
# Pre-allocate MTTKRP buffers
mttkrp_buffers = ntuple(n -> create_mttkrp_buffer(X, M.U, n), N)
# Alternating Least Squares (ALS) iterations
for _ in 1:algorithm.maxiters
for n in 1:N
V = reduce(.*, M.U[i]'M.U[i] for i in setdiff(1:N, n))
mttkrp!(M.U[n], X, M.U, n, mttkrp_buffers[n])
rdiv!(M.U[n], lu!(V))
M.λ .= norm.(eachcol(M.U[n]))
M.U[n] ./= permutedims(M.λ)
end
end
return M
end
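# Example (hypothetical sketch): ALS can be requested explicitly, e.g.
#
#     M = gcp(X, r; algorithm = GCPAlgorithms.ALS(maxiters = 100))   # `X` and `r` are assumed inputs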
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 7284 | ## Algorithm: FastALS
"""
FastALS
**Fast** **A**lternating **L**east **S**quares.
Efficient ALS algorithm proposed in:
> **Fast Alternating LS Algorithms for High Order
> CANDECOMP/PARAFAC Tensor Factorizations**.
> Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki.
> *IEEE Transactions on Signal Processing*, 2013.
> DOI: 10.1109/TSP.2013.2269903
Algorithm parameters:
- `maxiters::Int` : max number of iterations (default: `200`)
"""
Base.@kwdef struct FastALS <: AbstractAlgorithm
maxiters::Int = 200
end
function _gcp(
X::Array{TX,N},
r,
loss::GCPLosses.LeastSquares,
constraints::Tuple{},
algorithm::GCPAlgorithms.FastALS,
init,
) where {TX<:Real,N}
# Initialization
M = deepcopy(init)
# Determine order of modes of MTTKRP to compute
Jns = [prod(size(X)[1:n]) for n in 1:N]
Kns = [prod(size(X)[n+1:end]) for n in 1:N]
Kn_minus_ones = [prod(size(X)[n:end]) for n in 1:N]
n_star = findlast(n -> Jns[n] <= Kn_minus_ones[n], 1:N)
order = vcat([i for i in n_star:-1:1], [i for i in n_star+1:N])
buffers = create_FastALS_buffers(M.U, order, Jns, Kns)
for _ in 1:algorithm.maxiters
FastALS_iter!(X, M, order, Jns, Kns, buffers)
end
return M
end
"""
    FastALS_iter!(X, M, order, Jns, Kns, buffers)
Perform one FastALS pass over all modes, updating the factor matrices and weights of `M` in place.
The algorithm for computing the MTTKRP sequences is from "Fast Alternating LS Algorithms
for High Order CANDECOMP/PARAFAC Tensor Factorizations" by Phan et al., specifically
Section III-C.
"""
function FastALS_iter!(X, M, order, Jns, Kns, buffers)
N = ndims(X)
R = size(M.U[1])[2]
# Compute MTTKRPs recursively
n_star = order[1]
for n in order
if n == n_star
kr_right = khatrirao!(buffers.kr_buffer_descending, M.U[reverse(n_star+1:N)]...)
if n_star == 1
mul!(M.U[n], reshape(X, (Jns[n], Kns[n])), kr_right)
else
mul!(buffers.descending_buffers[1], reshape(X, (Jns[n], Kns[n])), kr_right)
_rl_outer_multiplication!(
buffers.descending_buffers[1],
M.U,
buffers.helper_buffers_descending[n_star-n+1],
n,
)
end
elseif n == n_star + 1
kr_left = khatrirao!(buffers.kr_buffer_ascending, M.U[reverse(1:n-1)]...)
if n == N
mul!(M.U[n], reshape(X, (Jns[n-1], Kns[n-1]))', kr_left)
else
mul!(
buffers.ascending_buffers[1],
(reshape(X, (Jns[n-1], Kns[n-1])))',
kr_left,
)
_lr_outer_multiplication!(
buffers.ascending_buffers[1],
M.U,
buffers.helper_buffers_ascending[n-n_star],
n,
)
end
elseif n < n_star
if n == 1
for r in 1:R
mul!(
view(M.U[n], :, r),
reshape(
view(buffers.descending_buffers[n_star-n], :, r),
(Jns[n], size(X)[n+1]),
),
view(M.U[n+1], :, r),
)
end
else
for r in 1:R
mul!(
view(buffers.descending_buffers[n_star-n+1], :, r),
reshape(
view(buffers.descending_buffers[n_star-n], :, r),
(Jns[n], size(X)[n+1]),
),
view(M.U[n+1], :, r),
)
end
_rl_outer_multiplication!(
buffers.descending_buffers[n_star-n+1],
M.U,
buffers.helper_buffers_descending[n_star-n+1],
n,
)
end
else
if n == N
for r in 1:R
mul!(
view(M.U[n], :, r),
reshape(
view(buffers.ascending_buffers[N-n_star-1], :, r),
(size(X)[n-1], Kns[n-1]),
)',
view(M.U[n-1], :, r),
)
end
else
for r in 1:R
mul!(
view(buffers.ascending_buffers[n-n_star], :, r),
reshape(
view(buffers.ascending_buffers[n-n_star-1], :, r),
(size(X)[n-1], Kns[n-1]),
)',
view(M.U[n-1], :, r),
)
end
_lr_outer_multiplication!(
buffers.ascending_buffers[n-n_star],
M.U,
buffers.helper_buffers_ascending[n-n_star],
n,
)
end
end
# Normalization, update weights
V = reduce(.*, M.U[i]'M.U[i] for i in setdiff(1:N, n))
rdiv!(M.U[n], lu!(V))
M.λ .= norm.(eachcol(M.U[n]))
M.U[n] ./= permutedims(M.λ)
end
end
# Helper function for right-to-left outer multiplications
function _rl_outer_multiplication!(Zn, U, kr_buffer, n)
khatrirao!(kr_buffer, U[reverse(1:n-1)]...)
for r in 1:size(U[n])[2]
mul!(
view(U[n], :, r),
reshape(view(Zn, :, r), (prod(size(U[i])[1] for i in 1:n-1), size(U[n])[1]))',
view(kr_buffer, :, r),
)
end
end
# Helper function for left-to-right outer multiplications
function _lr_outer_multiplication!(Zn, U, kr_buffer, n)
khatrirao!(kr_buffer, U[reverse(n+1:length(U))]...)
for r in 1:size(U[n])[2]
mul!(
view(U[n], :, r),
reshape(
view(Zn, :, r),
(size(U[n])[1], prod(size(U[i])[1] for i in n+1:length(U))),
),
view(kr_buffer, :, r),
)
end
end
function create_FastALS_buffers(
U::NTuple{N,TM},
order,
Jns,
Kns,
) where {TM<:AbstractMatrix,N}
n_star = order[1]
r = size(U[1])[2]
dims = [size(U[u])[1] for u in 1:length(U)]
# Allocate buffers
# Buffer for saved products between modes
descending_buffers =
n_star < 2 ? nothing : [similar(U[1], (Jns[n], r)) for n in n_star:-1:2]
ascending_buffers =
N - n_star - 1 < 1 ? nothing : [similar(U[1], (Kns[n], r)) for n in n_star:N]
# Buffers for khatri-rao products
kr_buffer_descending = similar(U[1], (Kns[n_star], r))
kr_buffer_ascending = similar(U[1], (Jns[n_star], r))
# Buffers for khatri-rao product in helper function
helper_buffers_descending =
n_star < 2 ? nothing : [similar(U[1], (prod(dims[1:n-1]), r)) for n in n_star:-1:2]
helper_buffers_ascending =
n_star >= N - 1 ? nothing :
[similar(U[1], (prod(dims[n+1:N]), r)) for n in n_star+1:N-1]
return (;
descending_buffers,
ascending_buffers,
kr_buffer_descending,
kr_buffer_ascending,
helper_buffers_descending,
helper_buffers_ascending,
)
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 3281 | ## Algorithm: LBFGSB
"""
LBFGSB
**L**imited-memory **BFGS** with **B**ox constraints.
Brief description of algorithm parameters:
- `m::Int` : max number of variable metric corrections (default: `10`)
- `factr::Float64` : function tolerance in units of machine epsilon (default: `1e7`)
- `pgtol::Float64` : (projected) gradient tolerance (default: `1e-5`)
- `maxfun::Int` : max number of function evaluations (default: `15000`)
- `maxiter::Int` : max number of iterations (default: `15000`)
- `iprint::Int` : verbosity (default: `-1`)
+ `iprint < 0` means no output
+ `iprint = 0` prints only one line at the last iteration
+ `0 < iprint < 99` prints `f` and `|proj g|` every `iprint` iterations
+ `iprint = 99` prints details of every iteration except n-vectors
+ `iprint = 100` also prints the changes of active set and final `x`
+ `iprint > 100` prints details of every iteration including `x` and `g`
See documentation of [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl) for more details.
"""
Base.@kwdef struct LBFGSB <: AbstractAlgorithm
m::Int = 10
factr::Float64 = 1e7
pgtol::Float64 = 1e-5
maxfun::Int = 15000
maxiter::Int = 15000
iprint::Int = -1
end
function _gcp(
X::Array{TX,N},
r,
loss,
constraints::Tuple{Vararg{GCPConstraints.LowerBound}},
algorithm::GCPAlgorithms.LBFGSB,
init,
) where {TX,N}
# T = promote_type(nonmissingtype(TX), Float64)
T = Float64 # LBFGSB.jl seems to only support Float64
# Compute lower bound from constraints
lower = maximum(constraint.value for constraint in constraints; init = T(-Inf))
# Error for unsupported loss/constraint combinations
dom = GCPLosses.domain(loss)
if dom == Interval(-Inf, +Inf)
lower in (-Inf, 0.0) || error(
"only lower bound constraints of `-Inf` or `0` are (currently) supported for loss functions with a domain of `-Inf .. Inf`",
)
elseif dom == Interval(0.0, +Inf)
lower == 0.0 || error(
"only lower bound constraints of `0` are (currently) supported for loss functions with a domain of `0 .. Inf`",
)
else
error(
"only loss functions with a domain of `-Inf .. Inf` or `0 .. Inf` are (currently) supported",
)
end
# Initialization
M0 = deepcopy(init)
u0 = vcat(vec.(M0.U)...)
# Setup vectorized objective function and gradient
vec_cutoffs = (0, cumsum(r .* size(X))...)
vec_ranges = ntuple(k -> vec_cutoffs[k]+1:vec_cutoffs[k+1], Val(N))
function f(u)
U = map(range -> reshape(view(u, range), :, r), vec_ranges)
return GCPLosses.objective(CPD(ones(T, r), U), X, loss)
end
function g!(gu, u)
U = map(range -> reshape(view(u, range), :, r), vec_ranges)
GU = map(range -> reshape(view(gu, range), :, r), vec_ranges)
GCPLosses.grad_U!(GU, CPD(ones(T, r), U), X, loss)
return gu
end
# Run LBFGSB
lbfgsopts = (; (pn => getproperty(algorithm, pn) for pn in propertynames(algorithm))...)
u = lbfgsb(f, g!, u0; lb = fill(lower, length(u0)), lbfgsopts...)[2]
U = map(range -> reshape(u[range], :, r), vec_ranges)
return CPD(ones(T, r), U)
end
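# Example (hypothetical sketch): LBFGSB options are passed through the algorithm struct, e.g.
#
#     M = gcp(X, r; loss = GCPLosses.Poisson(),
#             algorithm = GCPAlgorithms.LBFGSB(pgtol = 1e-4, maxiter = 500))   # `X`, `r` assumed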
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1845 | ## Tensor Kernel: khatrirao
"""
khatrirao(A1, A2, ...)
Compute the Khatri-Rao product (i.e., the column-wise Kronecker product)
of the matrices `A1`, `A2`, etc.
"""
function khatrirao(A::Vararg{AbstractMatrix})
I, r = _checked_khatrirao_dims(A...)
return khatrirao!(similar(A[1], prod(I), r), A...)
end
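# Example (hypothetical sketch): for `A` of size (2, 3) and `B` of size (4, 3),
# `khatrirao(A, B)` is an 8×3 matrix whose j-th column is `kron(A[:, j], B[:, j])`.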
"""
khatrirao!(K, A1, A2, ...)
Compute the Khatri-Rao product (i.e., the column-wise Kronecker product)
of the matrices `A1`, `A2`, etc. and store the result in `K`.
"""
function khatrirao!(K::AbstractMatrix, A::Vararg{AbstractMatrix,N}) where {N}
I, r = _checked_khatrirao_dims(A...)
# Check output dimensions
Base.require_one_based_indexing(K)
size(K) == (prod(I), r) || throw(
DimensionMismatch(
"Output `K` must have size equal to `(prod(size.(A,1)), size(A[1],2))",
),
)
# Compute recursively, using a good order for intermediate multiplications
if N == 1 # base case: N = 1
K .= A[1]
elseif N == 2 # base case: N = 2
reshape(K, I[2], I[1], r) .= reshape(A[2], :, 1, r) .* reshape(A[1], 1, :, r)
else # recursion: N > 2
n = argmin(n -> I[n] * I[n+1], 1:N-1)
khatrirao!(K, A[1:n-1]..., khatrirao(A[n], A[n+1]), A[n+2:end]...)
end
return K
end
"""
_checked_khatrirao_dims(A1, A2, ...)
Check that `A1`, `A2`, etc. have compatible dimensions for the Khatri-Rao product.
If so, return a tuple of the number of rows and the shared number of columns.
If not, throw an error.
"""
function _checked_khatrirao_dims(A::Vararg{AbstractMatrix})
Base.require_one_based_indexing(A...)
allequal(size.(A, 2)) || throw(
DimensionMismatch(
"Matrices in a Khatri-Rao product must have the same number of columns.",
),
)
return size.(A, 1), size(A[1], 2)
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 5080 | ## Tensor Kernel: mttkrp
"""
mttkrp(X, (U1, U2, ..., UN), n)
Compute the Matricized Tensor Times Khatri-Rao Product (MTTKRP)
of an N-way tensor X with the matrices U1, U2, ..., UN along mode n.
See also: `mttkrp!`
"""
function mttkrp(
X::AbstractArray{T,N},
U::NTuple{N,TM},
n::Integer,
) where {TM<:AbstractMatrix,T,N}
_checked_mttkrp_dims(X, U, n)
return mttkrp!(similar(U[n]), X, U, n)
end
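# Example (hypothetical sketch): for `X` of size (10, 20, 30) and
# `U = (rand(10, 5), rand(20, 5), rand(30, 5))`, `mttkrp(X, U, 2)` returns a 20×5 matrix:
# the mode-2 matricization of `X` times the Khatri-Rao product of the other factor matrices.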
"""
mttkrp!(G, X, (U1, U2, ..., UN), n, buffer=create_mttkrp_buffer(X, U, n))
Compute the Matricized Tensor Times Khatri-Rao Product (MTTKRP)
of an N-way tensor X with the matrices U1, U2, ..., UN along mode n
and store the result in G.
Optionally, provide a `buffer` for intermediate calculations.
Always use `create_mttkrp_buffer` to make the `buffer`;
the internal details of `buffer` may change in the future
and should not be relied upon.
Algorithm is based on Section III-B of the paper:
> **Fast Alternating LS Algorithms for High Order
> CANDECOMP/PARAFAC Tensor Factorizations**.
> Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki.
> *IEEE Transactions on Signal Processing*, 2013.
> DOI: 10.1109/TSP.2013.2269903
See also: `mttkrp`, `create_mttkrp_buffer`
"""
function mttkrp!(
G::TM,
X::AbstractArray{T,N},
U::NTuple{N,TM},
n::Integer,
buffer = create_mttkrp_buffer(X, U, n),
) where {TM<:AbstractMatrix,T,N}
I, r = _checked_mttkrp_dims(X, U, n)
# Check output dimensions
Base.require_one_based_indexing(G)
size(G) == size(U[n]) ||
throw(DimensionMismatch("Output `G` must have the same size as `U[n]`"))
# Choose appropriate multiplication order:
# + n == 1: no splitting required
# + n == N: no splitting required
# + 1 < n < N: better to multiply "bigger" side out first
# + prod(I[1:n]) > prod(I[n:N]): better to multiply left-to-right
# + prod(I[1:n]) < prod(I[n:N]): better to multiply right-to-left
if n == 1
kr_right = n + 1 == N ? U[N] : khatrirao!(buffer.kr_right, U[reverse(n+1:N)]...)
mul!(G, reshape(X, I[1], :), kr_right)
elseif n == N
kr_left = n == 2 ? U[1] : khatrirao!(buffer.kr_left, U[reverse(1:n-1)]...)
mul!(G, transpose(reshape(X, :, I[N])), kr_left)
else
# Compute left and right Khatri-Rao products
kr_left = n == 2 ? U[1] : khatrirao!(buffer.kr_left, U[reverse(1:n-1)]...)
kr_right = n + 1 == N ? U[N] : khatrirao!(buffer.kr_right, U[reverse(n+1:N)]...)
if prod(I[1:n]) > prod(I[n:N])
# Inner multiplication: left side
mul!(
reshape(buffer.inner, :, r),
transpose(reshape(X, :, prod(I[n:N]))),
kr_left,
)
# Outer multiplication: right side
for j in 1:r
mul!(
view(G, :, j),
reshape(selectdim(buffer.inner, ndims(buffer.inner), j), I[n], :),
view(kr_right, :, j),
)
end
else
# Inner multiplication: right side
mul!(reshape(buffer.inner, :, r), reshape(X, prod(I[1:n]), :), kr_right)
# Outer multiplication: left side
for j in 1:r
mul!(
view(G, :, j),
transpose(
reshape(selectdim(buffer.inner, ndims(buffer.inner), j), :, I[n]),
),
view(kr_left, :, j),
)
end
end
end
return G
end
"""
create_mttkrp_buffer(X, U, n)
Create buffer to hold intermediate calculations in `mttkrp!`.
Always use `create_mttkrp_buffer` to make a `buffer` for `mttkrp!`;
the internal details of `buffer` may change in the future
and should not be relied upon.
See also: `mttkrp!`
"""
function create_mttkrp_buffer(
X::AbstractArray{T,N},
U::NTuple{N,TM},
n::Integer,
) where {TM<:AbstractMatrix,T,N}
I, r = _checked_mttkrp_dims(X, U, n)
# Allocate buffers
return (;
kr_left = n in 1:2 ? nothing : similar(U[1], prod(I[1:n-1]), r),
kr_right = n in N-1:N ? nothing : similar(U[n+1], prod(I[n+1:N]), r),
inner = n in [1, N] ? nothing :
prod(I[1:n]) > prod(I[n:N]) ? similar(U[n], I[n:N]..., r) :
similar(U[n], I[1:n]..., r),
)
end
"""
_checked_mttkrp_dims(X, (U1, U2, ..., UN), n)
Check that `X` and `U` have compatible dimensions for the mode-`n` MTTKRP.
If so, return a tuple of the number of rows and the shared number of columns
for the Khatri-Rao product. If not, throw an error.
"""
function _checked_mttkrp_dims(
X::AbstractArray{T,N},
U::NTuple{N,TM},
n::Integer,
) where {TM<:AbstractMatrix,T,N}
# Check mode
n in 1:N || throw(DimensionMismatch("`n` must be in `1:ndims(X)`"))
# Check Khatri-Rao product
I, r = _checked_khatrirao_dims(U...)
# Check tensor
Base.require_one_based_indexing(X)
(I == size(X)) ||
throw(DimensionMismatch("`X` and `U` do not have matching dimensions"))
return I, r
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1679 | ## Tensor Kernel: mttkrps
"""
mttkrps(X, (U1, U2, ..., UN))
Compute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS)
of an N-way tensor X with the matrices U1, U2, ..., UN.
See also: `mttkrps!`
"""
function mttkrps(X::AbstractArray{T,N}, U::NTuple{N,TM}) where {TM<:AbstractMatrix,T,N}
_checked_mttkrps_dims(X, U)
return mttkrps!(similar.(U), X, U)
end
"""
mttkrps!(G, X, (U1, U2, ..., UN))
Compute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS)
of an N-way tensor X with the matrices U1, U2, ..., UN and store the result in G.
See also: `mttkrps`
"""
function mttkrps!(
G::NTuple{N,TM},
X::AbstractArray{T,N},
U::NTuple{N,TM},
) where {TM<:AbstractMatrix,T,N}
_checked_mttkrps_dims(X, U)
# Check output dimensions
Base.require_one_based_indexing(G...)
size.(G) == size.(U) ||
throw(DimensionMismatch("Output `G` must have the same size as `U`"))
# Compute individual MTTKRP's
for n in 1:N
mttkrp!(G[n], X, U, n)
end
return G
end
"""
_checked_mttkrps_dims(X, (U1, U2, ..., UN))
Check that `X` and `U` have compatible dimensions for the mode-`n` MTTKRP.
If so, return a tuple of the number of rows and the shared number of columns
for the Khatri-Rao product. If not, throw an error.
"""
function _checked_mttkrps_dims(
X::AbstractArray{T,N},
U::NTuple{N,TM},
) where {TM<:AbstractMatrix,T,N}
# Check Khatri-Rao product
I, r = _checked_khatrirao_dims(U...)
# Check tensor
Base.require_one_based_indexing(X)
(I == size(X)) ||
throw(DimensionMismatch("`X` and `U` do not have matching dimensions"))
return I, r
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 41 | using TestItemRunner
@run_package_tests
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 5840 | ## CP decomposition type
@testitem "constructors" begin
using OffsetArrays
@testset "T=$T, K=$K" for T in [Float64, Float16], K in 0:2
λfull = T[1, 100, 10000]
U1full, U2full, U3full = T[1 2 3; 4 5 6], T[-1 0 1], T[1 2 3; 4 5 6; 7 8 9]
λ = λfull[1:K]
U1, U2, U3 = U1full[:, 1:K], U2full[:, 1:K], U3full[:, 1:K]
# Check type for various orders
@test CPD{T,0,Vector{T},Matrix{T}}(λ, ()) isa CPD{T,0,Vector{T},Matrix{T}}
@test CPD(λ, (U1,)) isa CPD{T,1,Vector{T},Matrix{T}}
@test CPD(λ, (U1, U2)) isa CPD{T,2,Vector{T},Matrix{T}}
@test CPD(λ, (U1, U2, U3)) isa CPD{T,3,Vector{T},Matrix{T}}
# Check requirement of one-based indexing
O1, O2 = OffsetArray(U1, 0:1, 0:K-1), OffsetArray(U2, 0:0, 0:K-1)
@test_throws ArgumentError CPD(λ, (O1, O2))
# Check dimension matching (for number of components)
@test_throws DimensionMismatch CPD(λfull, (U1, U2, U3))
@test_throws DimensionMismatch CPD(λ, (U1full, U2, U3))
@test_throws DimensionMismatch CPD(λ, (U1, U2full, U3))
@test_throws DimensionMismatch CPD(λ, (U1, U2, U3full))
end
end
@testitem "ncomps" begin
λ = [1, 100, 10000]
U1, U2, U3 = [1 2 3; 4 5 6], [-1 0 1], [1 2 3; 4 5 6; 7 8 9]
@test ncomps(CPD(λ, (U1,))) ==
ncomps(CPD(λ, (U1, U2))) ==
ncomps(CPD(λ, (U1, U2, U3))) ==
3
@test ncomps(CPD(λ[1:2], (U1[:, 1:2],))) ==
ncomps(CPD(λ[1:2], (U1[:, 1:2], U2[:, 1:2]))) ==
ncomps(CPD(λ[1:2], (U1[:, 1:2], U2[:, 1:2], U3[:, 1:2]))) ==
2
@test ncomps(CPD(λ[1:1], (U1[:, 1:1],))) ==
ncomps(CPD(λ[1:1], (U1[:, 1:1], U2[:, 1:1]))) ==
ncomps(CPD(λ[1:1], (U1[:, 1:1], U2[:, 1:1], U3[:, 1:1]))) ==
1
@test ncomps(CPD(λ[1:0], (U1[:, 1:0],))) ==
ncomps(CPD(λ[1:0], (U1[:, 1:0], U2[:, 1:0]))) ==
ncomps(CPD(λ[1:0], (U1[:, 1:0], U2[:, 1:0], U3[:, 1:0]))) ==
0
end
@testitem "ndims" begin
λ = [1, 100, 10000]
U1, U2, U3 = [1 2 3; 4 5 6], [-1 0 1], [1 2 3; 4 5 6; 7 8 9]
@test ndims(CPD{Int,0,Vector{Int},Matrix{Int}}(λ, ())) == 0
@test ndims(CPD(λ, (U1,))) == 1
@test ndims(CPD(λ, (U1, U2))) == 2
@test ndims(CPD(λ, (U1, U2, U3))) == 3
end
@testitem "size" begin
λ = [1, 100, 10000]
U1, U2, U3 = [1 2 3; 4 5 6], [-1 0 1], [1 2 3; 4 5 6; 7 8 9]
@test size(CPD(λ, (U1,))) == (size(U1, 1),)
@test size(CPD(λ, (U1, U2))) == (size(U1, 1), size(U2, 1))
@test size(CPD(λ, (U1, U2, U3))) == (size(U1, 1), size(U2, 1), size(U3, 1))
M = CPD(λ, (U1, U2, U3))
@test size(M, 1) == 2
@test size(M, 2) == 1
@test size(M, 3) == 3
@test size(M, 4) == 1
end
@testitem "show / summary" begin
M = CPD(rand.(2), rand.((3, 4, 5), 2))
Mstring = sprint((t, s) -> show(t, "text/plain", s), M)
λstring = sprint((t, s) -> show(t, "text/plain", s), M.λ)
Ustrings = sprint.((t, s) -> show(t, "text/plain", s), M.U)
@test Mstring == string(
"$(summary(M))\nλ weights:\n$λstring",
["\nU[$k] factor matrix:\n$Ustring" for (k, Ustring) in enumerate(Ustrings)]...,
)
end
@testitem "getindex" begin
@testset "K=$K" for K in 0:2
T = Float64
λfull = T[1, 100, 10000]
U1full, U2full, U3full = T[1 2 3; 4 5 6], T[-1 0 1], T[1 2 3; 4 5 6; 7 8 9]
λ = λfull[1:K]
U1, U2, U3 = U1full[:, 1:K], U2full[:, 1:K], U3full[:, 1:K]
M = CPD(λ, (U1, U2, U3))
for i1 in axes(U1, 1), i2 in axes(U2, 1), i3 in axes(U3, 1)
Mi = sum(λ .* U1[i1, :] .* U2[i2, :] .* U3[i3, :])
@test Mi == M[i1, i2, i3]
@test Mi == M[CartesianIndex((i1, i2, i3))]
end
@test_throws BoundsError M[size(U1, 1)+1, 1, 1]
@test_throws BoundsError M[1, size(U2, 1)+1, 1]
@test_throws BoundsError M[1, 1, size(U3, 1)+1]
M = CPD(λ, (U1, U2))
for i1 in axes(U1, 1), i2 in axes(U2, 1)
Mi = sum(λ .* U1[i1, :] .* U2[i2, :])
@test Mi == M[i1, i2]
@test Mi == M[CartesianIndex((i1, i2))]
end
@test_throws BoundsError M[size(U1, 1)+1, 1]
@test_throws BoundsError M[1, size(U2, 1)+1]
M = CPD(λ, (U1,))
for i1 in axes(U1, 1)
Mi = sum(λ .* U1[i1, :])
@test Mi == M[i1]
@test Mi == M[CartesianIndex((i1,))]
end
@test_throws BoundsError M[size(U1, 1)+1]
end
end
@testitem "norm" begin
using LinearAlgebra
@testset "K=$K" for K in 0:2
T = Float64
λfull = T[1, 100, 10000]
U1full, U2full, U3full = T[1 2 3; 4 5 6], T[-1 0 1], T[1 2 3; 4 5 6; 7 8 9]
λ = λfull[1:K]
U1, U2, U3 = U1full[:, 1:K], U2full[:, 1:K], U3full[:, 1:K]
M = CPD(λ, (U1, U2, U3))
@test norm(M) ==
norm(M, 2) ==
sqrt(sum(abs2, M[I] for I in CartesianIndices(size(M))))
@test norm(M, 1) == sum(abs, M[I] for I in CartesianIndices(size(M)))
@test norm(M, 3) ==
(sum(m -> abs(m)^3, M[I] for I in CartesianIndices(size(M))))^(1 / 3)
M = CPD(λ, (U1, U2))
@test norm(M) ==
norm(M, 2) ==
sqrt(sum(abs2, M[I] for I in CartesianIndices(size(M))))
@test norm(M, 1) == sum(abs, M[I] for I in CartesianIndices(size(M)))
@test norm(M, 3) ==
(sum(m -> abs(m)^3, M[I] for I in CartesianIndices(size(M))))^(1 / 3)
M = CPD(λ, (U1,))
@test norm(M) ==
norm(M, 2) ==
sqrt(sum(abs2, M[I] for I in CartesianIndices(size(M))))
@test norm(M, 1) == sum(abs, M[I] for I in CartesianIndices(size(M)))
@test norm(M, 3) ==
(sum(m -> abs(m)^3, M[I] for I in CartesianIndices(size(M))))^(1 / 3)
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1322 | ## LossFunctionsExt
# DistanceLoss
@testitem "LossFunctions: DistanceLoss" begin
using Random
using LossFunctions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (30, 40, 50)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
Mh = gcp(X, r; loss = L2DistLoss())
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
end
end
# MarginLoss
@testitem "LossFunctions: MarginLoss" begin
using Random, IntervalSets
using LossFunctions
@testset "size(X)=$sz" for sz in [(15, 20, 25), (30, 40, 50)]
Random.seed!(0)
M = CPD([1], rand.(Ref([-1, 1]), sz, 1))
X = [M[I] for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(10)
Mr = gcp(
X,
1;
loss = GCPLosses.UserDefined(
(x, m) -> exp(-x * m);
deriv = (x, m) -> -x * exp(-x * m),
domain = Interval(-Inf, +Inf),
),
constraints = (),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(10)
Mh = gcp(X, 1; loss = ExpLoss())
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 708 | ## Loss types
@testitem "loss constructors" begin
# LeastSquares loss
@test GCPLosses.LeastSquares() isa GCPLosses.LeastSquares
# Poisson loss
@test GCPLosses.Poisson() isa GCPLosses.Poisson{Float64}
@test GCPLosses.Poisson(1.0f-5) isa GCPLosses.Poisson{Float32}
@test_throws DomainError GCPLosses.Poisson(-0.1)
end
@testitem "value/deriv/domain methods" begin
using InteractiveUtils: subtypes
using .GCPLosses: value, deriv, domain, AbstractLoss
@testset "type=$type" for type in subtypes(AbstractLoss)
@test hasmethod(value, Tuple{type,Real,Real})
@test hasmethod(deriv, Tuple{type,Real,Real})
@test hasmethod(domain, Tuple{type})
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 15745 | ## GCP decomposition - full optimization
@testitem "unsupported constraints" begin
using Random, IntervalSets
sz = (15, 20, 25)
r = 2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
# Exercise `default_constraints`
@test_throws ErrorException gcp(
X,
r;
loss = GCPLosses.UserDefined((x, m) -> (x - m)^2; domain = Interval(1, Inf)),
)
# Exercise `_gcp`
@test_throws ErrorException gcp(
X,
r;
loss = GCPLosses.LeastSquares(),
constraints = (GCPConstraints.LowerBound(1),),
)
@test_throws ErrorException gcp(X, r; loss = GCPLosses.Poisson(), constraints = ())
@test_throws ErrorException gcp(
X,
r;
loss = GCPLosses.UserDefined((x, m) -> (x - m)^2; domain = Interval(1, Inf)),
constraints = (GCPConstraints.LowerBound(1),),
)
end
@testitem "LeastSquares" begin
using Random
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
Mh = gcp(X, r; loss = GCPLosses.LeastSquares())
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
Xm = convert(Array{Union{Missing,eltype(X)}}, X)
Xm[1, 1, 1] = missing
Mm = gcp(Xm, r; loss = GCPLosses.LeastSquares())
@test maximum(I -> abs(Mm[I] - X[I]), CartesianIndices(X)) <= 1e-5
Mh = gcp(X, r) # test default (least-squares) loss
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
end
# 4-way tensor to exercise recursive part of the Khatri-Rao code
@testset "size(X)=$sz, rank(X)=$r" for sz in [(50, 40, 30, 2)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
Mh = gcp(X, r; loss = GCPLosses.LeastSquares())
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
Xm = convert(Array{Union{Missing,eltype(X)}}, X)
Xm[1, 1, 1, 1] = missing
Mm = gcp(Xm, r; loss = GCPLosses.LeastSquares())
@test maximum(I -> abs(Mm[I] - X[I]), CartesianIndices(X)) <= 1e-5
Mh = gcp(X, r) # test default (least-squares) loss
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
end
# 5-way tensor to exercise the else case in FastALS
@testset "size(X)=$sz, rank(X)=$r" for sz in [(10, 15, 20, 25, 30), (30, 25, 5, 5, 5)],
r in [2]
r = 2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
Mh = gcp(X, r; loss = GCPLosses.LeastSquares())
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
Xm = convert(Array{Union{Missing,eltype(X)}}, X)
Xm[1, 1, 1, 1, 1] = missing
Mm = gcp(Xm, r; loss = GCPLosses.LeastSquares())
@test maximum(I -> abs(Mm[I] - X[I]), CartesianIndices(X)) <= 1e-5
Mh = gcp(X, r) # test default (least-squares) loss
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
end
# Test old ALS method
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25)], r in [2]
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
Mh = gcp(X, r; loss = GCPLosses.LeastSquares(), algorithm = GCPAlgorithms.ALS())
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "NonnegativeLeastSquares" begin
using Random
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
Mh = gcp(X, r; loss = GCPLosses.NonnegativeLeastSquares())
@test maximum(I -> abs(Mh[I] - X[I]), CartesianIndices(X)) <= 1e-5
Xm = convert(Array{Union{Missing,eltype(X)}}, X)
Xm[1, 1, 1] = missing
Mm = gcp(Xm, r; loss = GCPLosses.NonnegativeLeastSquares())
@test maximum(I -> abs(Mm[I] - X[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "Poisson" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(fill(10.0, r), rand.(sz, r))
X = [rand(Poisson(M[I])) for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> m - x * log(m + 1e-10);
deriv = (x, m) -> 1 - x / (m + 1e-10),
domain = Interval(0.0, +Inf),
),
constraints = (GCPConstraints.LowerBound(0.0),),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.Poisson())
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "PoissonLog" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), randn.(sz, r))
X = [rand(Poisson(exp(M[I]))) for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> exp(m) - x * m;
deriv = (x, m) -> exp(m) - x,
domain = Interval(-Inf, +Inf),
),
constraints = (),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.PoissonLog())
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "Gamma" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
k = 1.5
X = [rand(Gamma(k, M[I] / k)) for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> log(m + 1e-10) + x / (m + 1e-10);
deriv = (x, m) -> -1 * (x / (m + 1e-10)^2) + (1 / (m + 1e-10)),
domain = Interval(0.0, +Inf),
),
constraints = (GCPConstraints.LowerBound(0.0),),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.Gamma())
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "Rayleigh" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [rand(Rayleigh(M[I] / (sqrt(pi / 2)))) for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> 2 * log(m + 1e-10) + (pi / 4) * ((x / (m + 1e-10))^2);
deriv = (x, m) -> 2 / (m + 1e-10) - (pi / 2) * (x^2 / (m + 1e-10)^3),
domain = Interval(0.0, +Inf),
),
constraints = (GCPConstraints.LowerBound(0.0),),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.Rayleigh())
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "BernoulliOdds" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [rand(Bernoulli(M[I] / (M[I] + 1))) for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> log(m + 1) - x * log(m + 1e-10);
deriv = (x, m) -> 1 / (m + 1) - (x / (m + 1e-10)),
domain = Interval(0.0, +Inf),
),
constraints = (GCPConstraints.LowerBound(0.0),),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.BernoulliOdds())
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "BernoulliLogitsLoss" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [
rand(Bernoulli(exp(M[I]) / (exp(M[I]) + 1))) for I in CartesianIndices(size(M))
]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> log(1 + exp(m)) - x * m;
deriv = (x, m) -> exp(m) / (1 + exp(m)) - x,
domain = Interval(-Inf, +Inf),
),
constraints = (),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.BernoulliLogit())
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "NegativeBinomialOdds" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
num_failures = 5
X = [
rand(NegativeBinomial(num_failures, M[I] / (M[I] + 1))) for
I in CartesianIndices(size(M))
]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> (num_failures + x) * log(1 + m) - x * log(m + 1e-10);
deriv = (x, m) -> (num_failures + x) / (1 + m) - x / (m + 1e-10),
domain = Interval(0.0, +Inf),
),
constraints = (GCPConstraints.LowerBound(0.0),),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.NegativeBinomialOdds(num_failures))
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "Huber" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
# Compute reference
Δ = 1
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> abs(x - m) <= Δ ? (x - m)^2 : 2 * Δ * abs(x - m) - Δ^2;
deriv = (x, m) ->
abs(x - m) <= Δ ? -2 * (x - m) : -2 * sign(x - m) * Δ * x,
domain = Interval(-Inf, +Inf),
),
constraints = (),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.Huber(Δ))
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "BetaDivergence" begin
using Random, IntervalSets
using Distributions
@testset "size(X)=$sz, rank(X)=$r, β" for sz in [(15, 20, 25), (50, 40, 30)],
r in 1:2,
β in [0, 0.5, 1]
Random.seed!(0)
M = CPD(ones(r), rand.(sz, r))
# May want to consider other distributions depending on value of β
X = [rand(Poisson(M[I])) for I in CartesianIndices(size(M))]
function beta_value(β, x, m)
if β == 0
return x / (m + 1e-10) + log(m + 1e-10)
elseif β == 1
return m - x * log(m + 1e-10)
else
return 1 / β * m^β - 1 / (β - 1) * x * m^(β - 1)
end
end
function beta_deriv(β, x, m)
if β == 0
return -x / (m + 1e-10)^2 + 1 / (m + 1e-10)
elseif β == 1
return 1 - x / (m + 1e-10)
else
return m^(β - 1) - x * m^(β - 2)
end
end
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> beta_value(β, x, m);
deriv = (x, m) -> beta_deriv(β, x, m),
domain = Interval(0.0, +Inf),
),
constraints = (GCPConstraints.LowerBound(0.0),),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.BetaDivergence(β))
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testitem "UserDefined" begin
using Random, Distributions, IntervalSets
@testset "Least Squares" begin
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(ones(r), randn.(sz, r))
X = [M[I] for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> (x - m)^2;
deriv = (x, m) -> 2 * (m - x),
domain = Interval(-Inf, +Inf),
),
constraints = (),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(X, r; loss = GCPLosses.UserDefined((x, m) -> (x - m)^2))
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
@testset "Poisson" begin
@testset "size(X)=$sz, rank(X)=$r" for sz in [(15, 20, 25), (50, 40, 30)], r in 1:2
Random.seed!(0)
M = CPD(fill(10.0, r), rand.(sz, r))
X = [rand(Poisson(M[I])) for I in CartesianIndices(size(M))]
# Compute reference
Random.seed!(0)
Mr = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> m - x * log(m + 1e-10);
deriv = (x, m) -> 1 - x / (m + 1e-10),
domain = Interval(0.0, +Inf),
),
constraints = (GCPConstraints.LowerBound(0.0),),
algorithm = GCPAlgorithms.LBFGSB(),
)
# Test
Random.seed!(0)
Mh = gcp(
X,
r;
loss = GCPLosses.UserDefined(
(x, m) -> m - x * log(m + 1e-10);
domain = 0.0 .. Inf,
),
)
@test maximum(I -> abs(Mh[I] - Mr[I]), CartesianIndices(X)) <= 1e-5
end
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | code | 1547 | ## Tensor Kernels
@testitem "mttkrp" begin
using Random
using GCPDecompositions.TensorKernels
@testset "size=$sz, rank=$r" for sz in [(10, 30, 40)], r in [5]
Random.seed!(0)
X = randn(sz)
U = randn.(sz, r)
N = length(sz)
for n in 1:N
Xn = reshape(permutedims(X, [n; setdiff(1:N, n)]), size(X, n), :)
Zn = reduce(
hcat,
[reduce(kron, [U[i][:, j] for i in reverse(setdiff(1:N, n))]) for j in 1:r],
)
@test mttkrp(X, U, n) ≈ Xn * Zn
end
end
end
@testitem "khatrirao" begin
using Random
using GCPDecompositions.TensorKernels
@testset "size=$sz, rank=$r" for sz in [(10,), (10, 20), (10, 30, 40)], r in [5]
Random.seed!(0)
U = randn.(sz, r)
Zn = reduce(hcat, [reduce(kron, [Ui[:, j] for Ui in U]) for j in 1:r])
@test khatrirao(U...) ≈ Zn
end
end
@testitem "mttkrps" begin
using Random
using GCPDecompositions.TensorKernels
@testset "size=$sz, rank=$r" for sz in [(10, 30), (10, 30, 40)], r in [5]
Random.seed!(0)
X = randn(sz)
U = randn.(sz, r)
N = length(sz)
G = map(1:N) do n
Xn = reshape(permutedims(X, [n; setdiff(1:N, n)]), size(X, n), :)
Zn = reduce(
hcat,
[reduce(kron, [U[i][:, j] for i in reverse(setdiff(1:N, n))]) for j in 1:r],
)
return Xn * Zn
end
@test all(mttkrps(X, U) .≈ G)
end
end
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 2756 | # GCPDecompositions: Generalized CP Decompositions
[](https://juliahub.com/ui/Packages/GCPDecompositions/HR3AK)
[](https://dahong67.github.io/GCPDecompositions.jl/stable/)
[](https://dahong67.github.io/GCPDecompositions.jl/dev/)
[](https://www.repostatus.org/#wip)
[](https://juliahub.com/ui/Packages/GCPDecompositions/HR3AK)
[](https://github.com/dahong67/GCPDecompositions.jl/actions/workflows/CI.yml?query=branch%3Amaster)
[](https://codecov.io/gh/dahong67/GCPDecompositions.jl)
<!-- [](https://pkgs.genieframework.com?packages=GCPDecompositions) -->
> 👋 *This package provides research code and work is ongoing.
> If you are interested in using it in your own research,
> **I'd love to hear from you and collaborate!**
> Feel free to write: [email protected]*
Please cite the following papers for this technique:
> David Hong, Tamara G. Kolda, Jed A. Duersch.
> "Generalized Canonical Polyadic Tensor Decomposition",
> *SIAM Review* 62:133-163, 2020.
> https://doi.org/10.1137/18M1203626
> https://arxiv.org/abs/1808.07452
>
> Tamara G. Kolda, David Hong.
> "Stochastic Gradients for Large-Scale Tensor Decomposition",
> *SIAM Journal on Mathematics of Data Science* 2:1066-1095, 2020.
> https://doi.org/10.1137/19M1266265
> https://arxiv.org/abs/1906.01687
In BibTeX form:
```bibtex
@Article{hkd2020gcp,
title = "Generalized Canonical Polyadic Tensor Decomposition",
author = "David Hong and Tamara G. Kolda and Jed A. Duersch",
journal = "{SIAM} Review",
year = "2020",
volume = "62",
number = "1",
pages = "133--163",
DOI = "10.1137/18M1203626",
}
@Article{kh2020sgf,
title = "Stochastic Gradients for Large-Scale Tensor Decomposition",
author = "Tamara G. Kolda and David Hong",
journal = "{SIAM} Journal on Mathematics of Data Science",
year = "2020",
volume = "2",
number = "4",
pages = "1066--1095",
DOI = "10.1137/19M1266265",
}
```
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 1745 | # GCPDecompositions: Generalized CP Decompositions
Documentation for [GCPDecompositions](https://github.com/dahong67/GCPDecompositions.jl).
> 👋 *This package provides research code and work is ongoing.
> If you are interested in using it in your own research,
> **I'd love to hear from you and collaborate!**
> Feel free to write: [[email protected]](mailto:[email protected])*
Please cite the following papers for this technique:
> David Hong, Tamara G. Kolda, Jed A. Duersch.
> "Generalized Canonical Polyadic Tensor Decomposition",
> *SIAM Review* 62:133-163, 2020.
> [https://doi.org/10.1137/18M1203626](https://doi.org/10.1137/18M1203626)
> [https://arxiv.org/abs/1808.07452](https://arxiv.org/abs/1808.07452)
>
> Tamara G. Kolda, David Hong.
> "Stochastic Gradients for Large-Scale Tensor Decomposition",
> *SIAM Journal on Mathematics of Data Science* 2:1066-1095, 2020.
> [https://doi.org/10.1137/19M1266265](https://doi.org/10.1137/19M1266265)
> [https://arxiv.org/abs/1906.01687](https://arxiv.org/abs/1906.01687)
In BibTeX form:
```bibtex
@Article{hkd2020gcp,
title = "Generalized Canonical Polyadic Tensor Decomposition",
author = "David Hong and Tamara G. Kolda and Jed A. Duersch",
journal = "{SIAM} Review",
year = "2020",
volume = "62",
number = "1",
pages = "133--163",
DOI = "10.1137/18M1203626",
}
@Article{kh2020sgf,
title = "Stochastic Gradients for Large-Scale Tensor Decomposition",
author = "Tamara G. Kolda and David Hong",
journal = "{SIAM} Journal on Mathematics of Data Science",
year = "2020",
volume = "2",
number = "4",
pages = "1066--1095",
DOI = "10.1137/19M1266265",
}
```
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 4785 | # Quick start guide
Let's install GCPDecompositions
and run our first **G**eneralized **CP** Tensor **Decomposition**!
## Step 1: Install Julia
Go to [https://julialang.org/downloads](https://julialang.org/downloads)
and install the current stable release.
To check your installation,
open up Julia and try a simple calculation like `1+1`:
```@repl
1 + 1
```
More info: [https://docs.julialang.org/en/v1/manual/getting-started/](https://docs.julialang.org/en/v1/manual/getting-started/)
## Step 2: Install GCPDecompositions
GCPDecompositions can be installed using
Julia's excellent builtin package manager.
```julia-repl
julia> import Pkg; Pkg.add("GCPDecompositions")
```
This downloads, installs, and precompiles GCPDecompositions
(and all its dependencies).
Don't worry if it takes a few minutes to complete.
!!! tip "Tip: Interactive package management with the Pkg REPL mode"
Here we used the [functional API](https://pkgdocs.julialang.org/v1/api/)
for the builtin package manager.
Pkg also has a very nice interactive interface (called the Pkg REPL)
that is built right into the Julia REPL!
Learn more here: [https://pkgdocs.julialang.org/v1/getting-started/](https://pkgdocs.julialang.org/v1/getting-started/)
!!! tip "Tip: Pkg environments"
The package manager has excellent support
for creating separate installation environments.
We strongly recommend using environments to create
isolated and reproducible setups.
Learn more here: [https://pkgdocs.julialang.org/v1/environments/](https://pkgdocs.julialang.org/v1/environments/)
## Step 3: Run GCPDecompositions
Let's create a simple three-way (a.k.a. order-three) data tensor
that has a rank-one signal plus noise!
```@repl quickstart
dims = (10, 20, 30)
X = ones(dims) + randn(dims); # semicolon suppresses the output
```
Mathematically,
this data tensor can be written as follows:
```math
X
=
\underbrace{
\mathbf{1}_{10} \circ \mathbf{1}_{20} \circ \mathbf{1}_{30}
}_{\text{rank-one signal}}
+
N
\in
\mathbb{R}^{10 \times 20 \times 30}
,
```
where
``\mathbf{1}_{n} \in \mathbb{R}^n`` denotes a vector of ``n`` ones,
``\circ`` denotes the outer product,
and
``N_{ijk} \overset{iid}{\sim} \mathcal{N}(0,1)``.
Now, to get a rank ``r=1`` GCP decomposition simply load the package
and run `gcp`.
```@repl quickstart
using GCPDecompositions
r = 1 # desired rank
M = gcp(X, r)
```
This returns a `CPD` (short for **CP** **D**ecomposition)
with weights ``\lambda`` and factor matrices ``U_1``, ``U_2``, and ``U_3``.
Mathematically, this is the following decomposition
(read [Overview](@ref) to learn more):
```math
M
=
\sum_{i=1}^{r}
\lambda[i]
\cdot
U_1[:,i] \circ U_2[:,i] \circ U_3[:,i]
\in
\mathbb{R}^{10 \times 20 \times 30}
.
```
We can extract each of these as follows:
```@repl quickstart
M.λ # to write `λ`, type `\lambda` then hit tab
M.U[1]
M.U[2]
M.U[3]
```
Let's check how close
the factor matrices ``U_1``, ``U_2``, and ``U_3``
(which were estimated from the noisy data)
are to the true signal components
``\mathbf{1}_{10}``, ``\mathbf{1}_{20}``, and ``\mathbf{1}_{30}``.
We use the angle between the vectors
since the scale of each factor matrix isn't meaningful on its own
(read [Overview](@ref) to learn more):
```@repl quickstart
using LinearAlgebra: normalize
vecangle(u, v) = acos(normalize(u)'*normalize(v)) # angle between vectors in radians
vecangle(M.U[1][:,1], ones(10))
vecangle(M.U[2][:,1], ones(20))
vecangle(M.U[3][:,1], ones(30))
```
The decomposition does a pretty good job
of extracting the signal from the noise!
The power of **Generalized** CP Decomposition
is that we can fit CP decompositions to data
using different losses (i.e., different notions of fit).
By default, `gcp` uses the (conventional) least-squares loss,
but we can easily try another!
For example,
to try non-negative least-squares simply run
```@repl quickstart
M_nonneg = gcp(X, 1; loss = GCPLosses.NonnegativeLeastSquares())
```
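Other losses follow the same pattern.
As a rough sketch (not executed above; `X_counts` below is made-up synthetic count data),
a Poisson loss can be used for count-valued tensors:

```julia
X_counts = rand(0:10, 10, 20, 30)   # hypothetical count data
M_poisson = gcp(X_counts, 1; loss = GCPLosses.Poisson())
```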
!!! tip "Congratulations!"
Congratulations!
You have successfully installed GCPDecompositions
and run some Generalized CP decompositions!
## Next steps
Ready to learn more?
- If you are new to tensor decompositions (or to GCP decompositions in particular), check out the [Overview](@ref) page in the manual. Also check out some of the demos!
- To learn about all the different loss functions you can use or about how to add your own, check out the [Loss functions](@ref) page in the manual.
- To learn about different constraints you can add, check out the [Constraints](@ref) page in the manual.
- To learn about different algorithms you can choose, check out the [Algorithms](@ref) page in the manual.
Want to understand the internals and possibly contribute?
Check out the developer docs.
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 127 | # Overview of demos
!!! warning "Work-in-progress"
This page of the docs is still a work-in-progress. Check back later!
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 467 | # Tensor Kernels
!!! warning "Work-in-progress"
This page of the docs is still a work-in-progress. Check back later!
```@docs
GCPDecompositions.TensorKernels
GCPDecompositions.TensorKernels.khatrirao
GCPDecompositions.TensorKernels.khatrirao!
GCPDecompositions.TensorKernels.mttkrp
GCPDecompositions.TensorKernels.mttkrp!
GCPDecompositions.TensorKernels.create_mttkrp_buffer
GCPDecompositions.TensorKernels.mttkrps
GCPDecompositions.TensorKernels.mttkrps!
```
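The kernels can also be called directly.
Here is a small sketch (the random tensor and factor matrices below are made up
purely for illustration) computing a mode-1 MTTKRP and a Khatri-Rao product:

```julia
using GCPDecompositions.TensorKernels

X = randn(10, 20, 30)
U = randn.((10, 20, 30), 5)   # factor matrices for rank 5
G1 = mttkrp(X, U, 1)          # 10 × 5 matricized-tensor-times-Khatri-Rao-product (mode 1)
Z = khatrirao(U[3], U[2])     # (30*20) × 5 Khatri-Rao product
```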
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 322 | # Private functions
!!! warning "Work-in-progress"
This page of the docs is still a work-in-progress. Check back later!
```@docs
GCPAlgorithms._gcp
GCPDecompositions.TensorKernels._checked_khatrirao_dims
GCPDecompositions.TensorKernels._checked_mttkrp_dims
GCPDecompositions.TensorKernels._checked_mttkrps_dims
```
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 331 | # Algorithms
!!! warning "Work-in-progress"
This page of the docs is still a work-in-progress. Check back later!
```@docs
GCPAlgorithms
GCPAlgorithms.AbstractAlgorithm
```
```@autodocs
Modules = [GCPAlgorithms]
Filter = t -> t in subtypes(GCPAlgorithms.AbstractAlgorithm) || (t isa Function && t != GCPAlgorithms._gcp)
```
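As a quick sketch (the random tensor below is made up for illustration),
the algorithm is selected via the `algorithm` keyword of `gcp`:

```julia
using GCPDecompositions

X = rand(15, 20, 25)
M_default = gcp(X, 2)   # default algorithm
M_als = gcp(X, 2; loss = GCPLosses.LeastSquares(), algorithm = GCPAlgorithms.ALS())
```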
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 291 | # Constraints
!!! warning "Work-in-progress"
This page of the docs is still a work-in-progress. Check back later!
```@docs
GCPConstraints
GCPConstraints.AbstractConstraint
```
```@autodocs
Modules = [GCPConstraints]
Filter = t -> t in subtypes(GCPConstraints.AbstractConstraint)
```
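As a quick sketch (the count tensor below is made-up synthetic data),
constraints are passed to `gcp` as a tuple via the `constraints` keyword;
here a lower bound keeps the model entries nonnegative for a Poisson loss:

```julia
using GCPDecompositions

X = rand(0:20, 15, 20, 25)   # hypothetical count data
M = gcp(X, 2;
    loss = GCPLosses.Poisson(),
    constraints = (GCPConstraints.LowerBound(0.0),),
    algorithm = GCPAlgorithms.LBFGSB(),
)
```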
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 363 | # Loss functions
!!! warning "Work-in-progress"
This page of the docs is still a work-in-progress. Check back later!
```@docs
GCPLosses
GCPLosses.AbstractLoss
```
```@autodocs
Modules = [GCPLosses]
Filter = t -> t in subtypes(GCPLosses.AbstractLoss)
```
```@docs
GCPLosses.value
GCPLosses.deriv
GCPLosses.domain
GCPLosses.objective
GCPLosses.grad_U!
```
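As a quick sketch (mirroring the package's test suite; the random tensor below is
made up for illustration), a custom loss can be supplied with `GCPLosses.UserDefined`:

```julia
using GCPDecompositions, IntervalSets

X = rand(10, 20, 30)
loss = GCPLosses.UserDefined(
    (x, m) -> (x - m)^2;             # loss value
    deriv = (x, m) -> 2 * (m - x),   # derivative with respect to the model entry m
    domain = Interval(-Inf, +Inf),
)
M = gcp(X, 2; loss = loss, constraints = (), algorithm = GCPAlgorithms.LBFGSB())
```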
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.3.0 | 02db2030eaba15350b05279e79b86bbd4d3a39bc | docs | 269 | # Overview
!!! warning "Work-in-progress"
This page of the docs is still a work-in-progress. Check back later!
```@docs
GCPDecompositions
gcp
CPD
ncomps
GCPDecompositions.default_constraints
GCPDecompositions.default_algorithm
GCPDecompositions.default_init
```
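As a quick sketch (with made-up weights and factor matrices), a `CPD` can also be
constructed directly and queried like an array:

```julia
using GCPDecompositions

λ = [1.0, 0.5]
U1, U2, U3 = rand(10, 2), rand(20, 2), rand(30, 2)
M = CPD(λ, (U1, U2, U3))   # rank-2 CP model of a 10 × 20 × 30 tensor
ncomps(M)                  # 2
size(M)                    # (10, 20, 30)
M[1, 2, 3]                 # entry assembled from λ and the factor matrices
```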
| GCPDecompositions | https://github.com/dahong67/GCPDecompositions.jl.git |
|
[
"MIT"
] | 0.2.1 | 6512022b637fc17c297bb3050ba2e6cdd50b6fa0 | code | 350 | using Documenter, BGEN
makedocs(
format = Documenter.HTML(),
sitename = "BGEN.jl",
authors = "Seyoon Ko",
clean = true,
debug = true,
pages = [
"index.md"
]
)
deploydocs(
repo = "github.com/OpenMendel/BGEN.jl.git",
target = "build",
deps = nothing,
make = nothing,
devbranch = "main"
)
| BGEN | https://github.com/OpenMendel/BGEN.jl.git |
|
[
"MIT"
] | 0.2.1 | 6512022b637fc17c297bb3050ba2e6cdd50b6fa0 | code | 1468 | module BGEN
import Base: length, getindex, setindex, firstindex, lastindex, eltype, size,
iterate, close, Iterators.filter
import Tables: columntable
import Statistics: mean
import SpecialFunctions: gamma_inc
import TranscodingStreams: initialize, finalize, buffermem, process, Buffer, Error
import GeneticVariantBase: GeneticData, Variant, VariantIterator, iterator
import GeneticVariantBase: chrom, pos, rsid, alleles, alt_allele, ref_allele
import GeneticVariantBase: maf, hwepval, infoscore, alt_dosages!
export Bgen, Samples, Variant, Genotypes, Index
export io, fsize, samples, n_samples, n_variants, compression
export varid, rsid, chrom, pos, n_alleles, alleles, minor_allele, major_allele
export phased, min_ploidy, max_ploidy, ploidy, bit_depth, missings
export parse_variants, iterator, probabilities!, minor_allele_dosage!
export first_allele_dosage!, clear!, hardcall, hardcall!
export select_region, variant_by_rsid, variant_by_pos, variant_by_index
export rsids, chroms, positions
export hwe, maf, info_score, counts!
export BgenVariantIteratorFromStart, BgenVariantIteratorFromOffsets
using CodecZlib, CodecZstd, SQLite, SIMD
include("structs.jl")
include("iterator.jl")
include("header.jl")
include("minor_certain.jl")
include("sample.jl")
include("variant.jl")
include("bgen_ftns.jl")
include("genotypes.jl")
include("index.jl")
include("utils.jl")
include("filter.jl")
datadir(parts...) = joinpath(@__DIR__, "..", "data", parts...)
end
| BGEN | https://github.com/OpenMendel/BGEN.jl.git |
|
[
"MIT"
] | 0.2.1 | 6512022b637fc17c297bb3050ba2e6cdd50b6fa0 | code | 5191 | """
    Bgen(path; sample_path=nothing, idx_path=path * ".bgi", ref_first=true)
Read in the Bgen file information: header, list of samples.
Variants and genotypes are read separately.
- `path`: path to the ".bgen" file.
- `sample_path`: path to ".sample" file, if applicable.
- `idx_path`: path to ".bgi" file; defaults to `path * ".bgi"` if that file exists.
- `ref_first`: whether the first allele of each variant is the reference allele (default `true`).
"""
function Bgen(path::AbstractString;
sample_path = nothing,
idx_path = isfile(path * ".bgi") ? path * ".bgi" : nothing,
ref_first = true
)
io = open(path)
fsize = filesize(path)
# read header
header = Header(io)
# read samples
if sample_path !== nothing
samples = get_samples(sample_path, header.n_samples)
# disregard sample names in the header if .sample file is provided
if header.has_sample_ids
header_sample_length = read(io, UInt32)
read(io, header_sample_length)
end
elseif header.has_sample_ids
samples = get_samples(io, header.n_samples)
else
samples = get_samples(header.n_samples)
end
offset = header.offset + 4 # location of the first variant_ids
if idx_path !== nothing
if isfile(idx_path)
idx = Index(idx_path)
else
@error "$idx_path is not a file"
end
else
idx = nothing
end
Bgen(io, fsize, header, samples, idx, ref_first)
end
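# Usage sketch (illustrative only; the file paths below are hypothetical):
#
#   b = Bgen("/path/to/data.bgen"; sample_path = "/path/to/data.sample")
#   n_samples(b), n_variants(b)   # sizes reported by the header
#   compression(b)                # "None", "Zlib", or "Zstd"
#   close(b)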
@inline io(b::Bgen) = b.io
Base.close(b::Bgen) = close(b.io)
@inline fsize(b::Bgen)::Int = b.fsize
@inline samples(b::Bgen) = b.samples
@inline n_samples(b::Bgen)::Int = b.header.n_samples
@inline n_variants(b::Bgen)::Int = b.header.n_variants
const compression_modes = ["None", "Zlib", "Zstd"]
@inline function compression(b::Bgen)
compression_modes[b.header.compression + 1]
end
"""
    iterator(b::Bgen; offsets=nothing, from_bgen_start=false)
Retrieve a variant iterator for `b`.
- If `offsets` is provided, or a ".bgen.bgi" index is available and
    `from_bgen_start` is `false`, it returns a `BgenVariantIteratorFromOffsets`,
    iterating over the list of offsets.
- Otherwise, it returns a `BgenVariantIteratorFromStart`, iterating sequentially
    from the start of the bgen file to its end.
"""
function iterator(b::Bgen; offsets=nothing, from_bgen_start=false)
if offsets === nothing
if b.idx === nothing || from_bgen_start
return BgenVariantIteratorFromStart(b)
else
return BgenVariantIteratorFromOffsets(b, BGEN.offsets(b.idx))
end
else
return BgenVariantIteratorFromOffsets(b, offsets)
end
end
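# Usage sketch (illustrative only): iterate variants sequentially from the start
# of the file, regardless of whether a .bgi index is present.
#
#   for v in iterator(b; from_bgen_start = true)
#       println(rsid(v), '\t', chrom(v), '\t', pos(v))
#   end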
"""
offset_first_variant(x)
returns the offset of the first variant
"""
@inline function offset_first_variant(x::Bgen)
return x.header.offset + 4
end
"""
parse_variants(b::Bgen; offsets=offsets)
Parse variants of the file.
"""
function parse_variants(b::Bgen; offsets=nothing, from_bgen_start=false)
collect(iterator(b; offsets=offsets, from_bgen_start=from_bgen_start))
end
function parse_variants(v::BgenVariantIterator)
collect(v)
end
"""
rsids(vi)
rsids(b; offsets=nothing, from_bgen_start=false)
Get rsid list of all variants.
Arguments:
- `vi`: a collection of `Variant`s
- `bgen`: `Bgen` object
- `offsets`: offset of each variant to be returned
"""
function rsids(b::Bgen; offsets=nothing, from_bgen_start=false)
if b.idx !== nothing && offsets === nothing
return rsids(b.idx)
else
vi = iterator(b; offsets=offsets, from_bgen_start=from_bgen_start)
rsids(vi)
end
end
function rsids(vi::BgenVariantIterator)
collect(v.rsid for v in vi)
end
"""
chroms(vi)
chroms(bgen; offsets=nothing)
Get chromosome list of all variants.
Arguments:
- `vi`: a collection of `Variant`s
- `bgen`: `Bgen` object
- `offsets`: offset of each variant to be returned
"""
function chroms(b::Bgen; vi=nothing, offsets=nothing, from_bgen_start=false)
if b.idx !== nothing && offsets === nothing
return chroms(b.idx)
else
vi = iterator(b; offsets=offsets, from_bgen_start=from_bgen_start)
chroms(vi)
end
end
function chroms(vi::BgenVariantIterator)
collect(v.chrom for v in vi)
end
function chrom(b::Bgen, v::BgenVariant)
chrom(v)
end
"""
positions(vi)
positions(bgen; offsets=nothing)
Get base pair positions of all variants.
Arguments:
- `vi`: a collection of `Variant`s
- `bgen`: `Bgen` object
- `offsets`: offset of each variant to be returned
"""
function positions(b::Bgen; offsets=nothing,
from_bgen_start=false)::Vector{Int}
if b.idx !== nothing && offsets === nothing
return positions(b.idx)
else
vi = iterator(b; offsets=offsets, from_bgen_start=from_bgen_start)
positions(vi)
end
end
function positions(vi::BgenVariantIterator)
collect(v.pos for v in vi)
end
function pos(b::Bgen, v::BgenVariant)
pos(v)
end
function rsid(b::Bgen, v::BgenVariant)
rsid(v)
end
function alleles(b::Bgen, v::BgenVariant)
alleles(v)
end
function alt_allele(b::Bgen, v::BgenVariant)
allele_list = alleles(v)
b.ref_first ? allele_list[2] : allele_list[1]
end
function ref_allele(b::Bgen, v::BgenVariant)
allele_list = alleles(v)
b.ref_first ? allele_list[1] : allele_list[2]
end
| BGEN | https://github.com/OpenMendel/BGEN.jl.git |
|
[
"MIT"
] | 0.2.1 | 6512022b637fc17c297bb3050ba2e6cdd50b6fa0 | code | 6669 | """
    filter(dest::AbstractString, b::Bgen, variant_mask::BitVector, sample_mask::BitVector;
        dest_sample=dest[1:end-5] * ".sample",
        sample_path=nothing, sample_names=b.samples,
        offsets=nothing, from_bgen_start=false, use_zlib=false)
Filter the input Bgen instance `b` based on `variant_mask` and `sample_mask`. The result
is saved in the new bgen file `dest`. Sample information is stored in `dest_sample`.
`sample_path` is the path of the `.sample` file for the input BGEN file, and
`sample_names` stores the sample names in the BGEN file.
`offsets` and `from_bgen_start` are arguments for the `iterator` function of `b`.
Only layout 2 is supported, and probability bit depths must be a multiple of 8.
The output is compressed with Zstd by default (or Zlib if `use_zlib=true`). The sample names
are stored in a separate .sample file, not in the output .bgen file.
"""
function filter(dest::AbstractString, b::Bgen, variant_mask::BitVector,
sample_mask::BitVector=trues(length(b.samples));
dest_sample = dest[1:end-5] * ".sample",
sample_path=nothing, sample_names=b.samples,
offsets=nothing, from_bgen_start=false, use_zlib=false)
@assert endswith(dest, ".bgen") "must use .bgen file"
@assert b.header.layout == 2 "only layout 2 is supported."
@assert length(variant_mask) == b.header.n_variants
@assert length(sample_mask) == b.header.n_samples
filter_samples(dest_sample, sample_mask; sample_path=sample_path, sample_names=sample_names)
open(dest, "w") do io
write(io, UInt32(20)) # offset, defaults to 20.
write(io, UInt32(20)) # length of header in bytes, defaults to 20.
write(io, UInt32(sum(variant_mask))) # number of variants
write(io, UInt32(sum(sample_mask))) # number of samples
write(io, Vector{UInt8}("bgen")) # magic number
if !use_zlib
flag = 0x0000000a # zstd, layout 2, do not store sample info
else
flag = 0x00000009 # zlib, layout 2, do not store sample info
end
write(io, flag)
v_it = iterator(b; offsets=offsets, from_bgen_start=from_bgen_start)
for (i, v) in enumerate(v_it)
if variant_mask[i]
write_variant(io, b, v, sample_mask; use_zlib=use_zlib)
end
end
end
end
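# Usage sketch (illustrative only): keep the first 100 variants and drop every
# second sample of an open `Bgen` handle `b`, writing the result to a new file.
#
#   variant_mask = falses(n_variants(b)); variant_mask[1:100] .= true
#   sample_mask = trues(n_samples(b)); sample_mask[2:2:end] .= false
#   BGEN.filter("subset.bgen", b, variant_mask, sample_mask)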
function write_variant(io::IOStream, b::Bgen, v::BgenVariant, sample_mask::BitVector; use_zlib=false)
write(io, UInt16(length(v.varid))) # length of varid
write(io, v.varid) # varid
write(io, UInt16(length(v.rsid))) # length of rsid
write(io, v.rsid) # rsid
write(io, UInt16(length(v.chrom))) # length of chrom
write(io, v.chrom) # chrom
write(io, v.pos) # position
write(io, v.n_alleles) # n of alleles
for a_idx in 1:v.n_alleles
write(io, UInt32(length(v.alleles[a_idx])))
write(io, v.alleles[a_idx])
end
decompressed = decompress(b.io, v, b.header)
if !all(sample_mask) # parse decompressed genotype, skip if all samples are chosen
n_samples_new = sum(sample_mask)
p = parse_preamble(decompressed, b.header, v)
@assert p.bit_depth % 8 == 0 # probabilities should be byte aligned
decompressed_new = Vector{UInt8}(undef, get_decompressed_length(p, decompressed, sample_mask))
decompressed_new[1:4] = reinterpret(UInt8, UInt32[n_samples_new]) # update number of samples
decompressed_new[5:6] .= decompressed[5:6] # number of alleles
ploidy_old = @view decompressed[9 : 8 + p.n_samples]
decompressed_new[9 : 8 + n_samples_new] = ploidy_old[sample_mask] # extract ploidies
ploidy_new = @view(decompressed_new[9 : 8 + n_samples_new]) .& 0x3f
decompressed_new[7] = minimum(ploidy_new) # min ploidy
decompressed_new[8] = maximum(ploidy_new) # max ploidy
# phased flag
decompressed_new[8 + n_samples_new + 1] = decompressed[8 + p.n_samples + 1]
# bit depth
decompressed_new[8 + n_samples_new + 2] = decompressed[8 + p.n_samples + 2]
offset = 10 + p.n_samples
offset_new = 10 + n_samples_new
base_bytes = p.bit_depth ÷ 8
# write each genotype
for (i, m) in enumerate(sample_mask)
current_block_length = if p.phased == 1
base_bytes * (ploidy_old[i] & 0x3f) * (p.n_alleles - 1)
else
z = ploidy_old[i] & 0x3f
k = p.n_alleles
base_bytes * (binomial(z + k - 1, k - 1) - 1)
end
if m
decompressed_new[offset_new + 1 : offset_new + current_block_length] .=
decompressed[offset + 1 : offset + current_block_length]
offset_new += current_block_length
end
offset += current_block_length
end
decompressed = decompressed_new
@assert length(decompressed_new) == offset_new
end
if !use_zlib
compressed = transcode(ZstdCompressor(), decompressed)
else
compressed = transcode(ZlibCompressor(), decompressed)
end
write(io, UInt32(length(compressed) + 4))
write(io, UInt32(length(decompressed)))
write(io, compressed)
end
function get_decompressed_length(p::Preamble, d::Vector{UInt8}, sample_mask::BitVector)
cum_count = 10
n_samples_new = sum(sample_mask)
cum_count += n_samples_new
base_bytes = p.bit_depth ÷ 8
ploidy = d[9: 8 + p.n_samples]
if p.phased == 1
cum_count += base_bytes * sum(ploidy[sample_mask] .& 0x3f) * (p.n_alleles - 1)
else
for (i, m) in enumerate(sample_mask)
if m
z = ploidy[i] & 0x3f
k = p.n_alleles
cum_count += base_bytes * (binomial(z + k - 1, k - 1) - 1)
end
end
end
cum_count
end
function filter_samples(dest::AbstractString, sample_mask::BitVector;
sample_path=nothing, sample_names=nothing)
io = open(dest, "w")
if sample_path !== nothing
sample_io = open(sample_path)
println(io, readline(sample_io))
println(io, readline(sample_io))
for (i, sm) in enumerate(sample_mask)
l = readline(sample_io)
if sm
println(io, l)
end
end
close(sample_io)
else
@assert sample_names !== nothing "either `sample_path` or `sample_names` must be provided"
println(io, "ID_1")
println(io, "0")
for (i, sm) in enumerate(sample_mask)
if sm
println(io, sample_names[i])
end
end
end
close(io)
end
| BGEN | https://github.com/OpenMendel/BGEN.jl.git |
|
[
"MIT"
] | 0.2.1 | 6512022b637fc17c297bb3050ba2e6cdd50b6fa0 | code | 24829 | const lookup = [i / 255 for i in 0:510]
@inline function unsafe_load_UInt64(v::Vector{UInt8}, i::Integer)
p = convert(Ptr{UInt64}, pointer(v, i))
unsafe_load(p)
end
"""
Genotypes{T}(p::Preamble, d::Vector{UInt8}) where T <: AbstractFloat
Create `Genotypes` struct from the preamble and decompressed data string.
"""
function Genotypes{T}(p::Preamble, d::Vector{UInt8}) where T <: AbstractFloat
Genotypes{T}(p, d, nothing, 0, nothing, false, false)
end
const zlib = ZlibDecompressor()
@inline function zstd_uncompress!(input::Vector{UInt8}, output::Vector{UInt8})
r = ccall((:ZSTD_decompress, CodecZstd.libzstd),
Csize_t, (Ptr{Cchar}, Csize_t, Ptr{Cchar}, Csize_t),
pointer(output), length(output), pointer(input), length(input))
@assert r == length(output) "zstd decompression returned data of wrong length"
end
@inline function check_decompressed_length(io, v, h)
    seek(io, v.geno_offset)
    decompressed_field = 0
    if h.compression != 0
        if h.layout == 1
            decompressed_length = 6 * h.n_samples
        elseif h.layout == 2
            decompressed_field = 4
            decompressed_length = read(io, UInt32)
        end
    else
        # no compression: the genotype block is stored as-is
        decompressed_length = v.next_var_offset - v.geno_offset
    end
    return decompressed_length, decompressed_field
end
"""
decompress(io, v, h; decompressed=nothing)
Decompress the compressed byte string for genotypes.
"""
function decompress(io::IOStream, v::BgenVariant, h::Header;
decompressed::Union{Nothing, AbstractVector{UInt8}}=nothing
)
compression = h.compression
seek(io, v.geno_offset)
decompressed_length, decompressed_field = check_decompressed_length(io, v, h)
if decompressed !== nothing
@assert length(decompressed) ==
decompressed_length "decompressed length mismatch"
else
decompressed = Vector{UInt8}(undef, decompressed_length)
end
compressed_length = v.next_var_offset - v.geno_offset - decompressed_field
buffer_compressed = read(io, compressed_length)
if compression == 0
decompressed .= buffer_compressed
else
if compression == 1
codec = zlib
elseif compression == 2
codec = nothing
else
@error "invalid compression"
end
if compression == 1
input = Buffer(buffer_compressed)
output = Buffer(decompressed)
error = Error()
initialize(codec)
_, _, e = process(codec, buffermem(input), buffermem(output), error)
if e === :error
throw(error[])
end
finalize(codec)
elseif compression == 2
zstd_uncompress!(buffer_compressed, decompressed)
end
end
return decompressed
end
"""
    parse_ploidy(ploidy, d, n_samples)
Parse the ploidy part of the preamble; also collects the indices of missing samples.
"""
function parse_ploidy(ploidy::Union{UInt8,AbstractVector{UInt8}}, d::AbstractVector{UInt8},
n_samples::Integer)
missings = Int[]
mask = 0x3f # 63 in UInt8
mask_8 = 0x8080808080808080 # UInt64, mask for missingness
idx1 = 9
if typeof(ploidy) == UInt8 # if constant ploidy, just scan for missingness
# check eight samples at a time
if n_samples >= 8
@inbounds for i in 0:8:(n_samples - (n_samples % 8) - 1)
if mask_8 & unsafe_load_UInt64(d, idx1 + i) != 0
for j in (i+1):(i+8)
if d[idx1 + j - 1] & 0x80 != 0
push!(missings, j)
end
end
end
end
end
# remainder not in multiple of 8
@inbounds for j in (n_samples - (n_samples % 8) + 1):n_samples
if d[idx1 + j - 1] & 0x80 != 0
push!(missings, j)
end
end
else
@inbounds for j in 1:n_samples
ploidy[j] = mask & d[idx1 + j - 1]
if d[idx1 + j - 1] & 0x80 != 0
push!(missings, j)
end
end
end
return missings
end
@inline function get_max_probs(max_ploidy, n_alleles, phased)
phased == 1 ?
n_alleles : binomial(max_ploidy + n_alleles - 1, n_alleles - 1)
end
"""
    parse_preamble(d, h, v)
Parse the preamble of the genotype block.
"""
function parse_preamble(d::AbstractVector{UInt8}, h::Header, v::BgenVariant)
startidx = 1
if h.layout == 1
n_samples = h.n_samples
n_alleles = 2
phased = false
min_ploidy = 0x02
max_ploidy = 0x02
bit_depth = 16
elseif h.layout == 2
n_samples = reinterpret(UInt32, @view(d[startidx:startidx+3]))[1]
startidx += 4
@assert n_samples == h.n_samples "invalid number of samples"
n_alleles = reinterpret(UInt16, @view(d[startidx:startidx+1]))[1]
startidx += 2
@assert n_alleles == v.n_alleles "invalid number of alleles"
min_ploidy = d[startidx]
startidx += 1
max_ploidy = d[startidx]
startidx += 1
else
@error "invalid layout"
end
constant_ploidy = (min_ploidy == max_ploidy)
if constant_ploidy
ploidy = max_ploidy
else
ploidy = Vector{UInt8}(undef, n_samples)
fill!(ploidy, 0)
end
missings = []
if h.layout == 2
# this function also parses missingness.
missings = parse_ploidy(ploidy, d, n_samples)
startidx += n_samples
phased = d[startidx]
startidx += 1
bit_depth = d[startidx]
startidx += 1
end
max_probs = get_max_probs(max_ploidy, n_alleles, phased)
Preamble(n_samples, n_alleles, phased, min_ploidy, max_ploidy, ploidy,
bit_depth, max_probs, missings)
end
"""
parse_layout1!(data, p, d, startidx)
Parse probabilities from layout 1.
"""
function parse_layout1!(data::AbstractArray{<:AbstractFloat},
p::Preamble, d::AbstractArray{UInt8}, startidx::Integer
)
@assert length(data) == p.n_samples * p.max_probs
factor = 1.0 / 32768
idx = startidx
@fastmath @inbounds for i in 1:p.max_probs:(p.n_samples * p.max_probs)
j = idx
data[i] = reinterpret(UInt16, @view(d[j : j + 1]))[1] * factor
data[i + 1] = reinterpret(UInt16, @view(d[j + 2 : j + 3]))[1] * factor
data[i + 2] = reinterpret(UInt16, @view(d[j + 4 : j + 5]))[1] * factor
idx += 6
# triple zero denotes missing for layout1
if data[i] == 0.0 && data[i+1] == 0.0 && data[i+2] == 0.0
data[i:i+2] .= NaN
push!(p.missings, (i-1) ÷ p.max_probs + 1)
end
end
return data
end
"""
parse_layout2!(data, p, d, startidx)
Parse probabilities from layout 2.
"""
function parse_layout2!(data::AbstractArray{<:AbstractFloat},
p::Preamble, d::AbstractArray{UInt8}, startidx::Integer
)
constant_ploidy = p.max_ploidy == p.min_ploidy
if p.phased == 0
nrows = p.n_samples
else
if constant_ploidy
nrows = p.n_samples * p.max_ploidy
else
nrows = sum(p.ploidy)
end
end
@assert length(data) == p.max_probs * nrows
max_less_1 = p.max_probs - 1
prob = 0.0
factor = 1.0 / (2 ^ p.bit_depth - 1)
# mask for depth not multiple of 8
probs_mask = 0xFFFFFFFFFFFFFFFF >> (64 - p.bit_depth)
bit_idx = 0
if constant_ploidy && p.max_probs == 3 && p.bit_depth == 8
# fast path for unphased, ploidy==2, 8 bits per prob.
idx1 = startidx
@inbounds for offset in 1:3:(3 * nrows)
idx2 = 2 * ((offset-1) ÷ 3)
first = d[idx1 + idx2]
second = d[idx1 + idx2 + 1]
data[offset] = lookup[first + 1]
data[offset + 1] = lookup[second + 1]
data[offset + 2] = lookup[256 - first - second]
end
else
idx1 = startidx
@inbounds for offset in 1:p.max_probs:(nrows * p.max_probs)
# number of probabilities to be read per row
if constant_ploidy
n_probs = max_less_1
elseif p.phased == 1
n_probs = p.n_alleles - 1
elseif p.ploidy[offset ÷ p.max_probs + 1] == 2 && p.n_alleles == 2
n_probs = 2
else
n_probs = binomial(p.ploidy[offset ÷ p.max_probs + 1] +
p.n_alleles - 1, p.n_alleles - 1) - 1
end
remainder = 1.0
@inbounds for i in 1:n_probs
j = idx1 + bit_idx ÷ 8
@inbounds prob = (unsafe_load_UInt64(d, j) >> (bit_idx % 8) &
probs_mask) * factor
bit_idx += p.bit_depth
remainder -= prob
data[offset + i - 1] = prob
end
data[offset + n_probs] = remainder
if n_probs + 1 < p.max_probs
data[(offset + n_probs + 1):(offset + p.max_probs - 1)] .= NaN
end
end
end
for m in p.missings
offset = p.max_probs * (m - 1) + 1
data[offset:(offset + p.max_probs - 1)] .= NaN
end
return data
end
"""
first_dosage_fast!(data, p, d, idx, layout)
Dosage retrieval for 8-bit biallele case, no floating-point operations!
"""
const one_255th = 1.0f0 / 255.0f0
const mask_odd = reinterpret(Vec{16, UInt16}, Vec{32, UInt8}(
tuple(repeat([0xff, 0x00], 16)...)))
const mask_even = reinterpret(Vec{16, UInt16}, Vec{32, UInt8}(
tuple(repeat([0x00, 0xff], 16)...)))
function first_dosage_fast!(data::Vector{T}, p::Preamble,
d::Vector{UInt8}, startidx::Integer, layout::UInt8
) where {T <:AbstractFloat}
@assert length(data) == p.n_samples
@assert layout == 2
@assert p.bit_depth == 8 && p.max_probs == 3 && p.max_ploidy == p.min_ploidy
idx1 = startidx
if p.n_samples >= 16
@inbounds for n in 1:16:(p.n_samples - p.n_samples % 16)
idx_base = idx1 + ((n-1) >> 1) << 2
r = reinterpret(Vec{16, UInt16}, vload(Vec{32, UInt8}, d, idx_base))
second = (r & mask_even) >> 8
first = (r & mask_odd) << 1
dosage_level = first + second
dosage_level_float = one_255th * convert(
Vec{16, T}, dosage_level)
vstore(dosage_level_float, data, n)
end
end
rem = p.n_samples % 16
if rem != 0
@inbounds for n in ((p.n_samples - rem) + 1) : p.n_samples
idx_base = idx1 + ((n - 1) << 1)
data[n] = lookup[d[idx_base] * 2 +
d[idx_base + 1] + 1]
end
end
return data
end
"""
    first_dosage_slow!(data, p, d, startidx, layout)
Dosage computation for the general case.
"""
function first_dosage_slow!(data::Vector{<:AbstractFloat}, p::Preamble,
d::Vector{UInt8}, startidx::Integer, layout::UInt8
)
@assert length(data) == p.n_samples
ploidy = p.max_ploidy
half_ploidy = ploidy / 2
maxval = 2 ^ p.bit_depth - 1
factor = layout == 2 ? 1.0 / maxval : 1.0 / 32768
probs_mask = 0xFFFFFFFFFFFFFFFF >> (64 - p.bit_depth)
bit_idx = 0
for n = 1:p.n_samples
if p.max_ploidy != p.min_ploidy
            ploidy = p.ploidy[n]
half_ploidy = ploidy ÷ 2
end
j = startidx + bit_idx ÷ 8
hom = (unsafe_load_UInt64(d, j) >> (bit_idx % 8)) & probs_mask
bit_idx += p.bit_depth
j = startidx + bit_idx ÷ 8
het = (unsafe_load_UInt64(d, j) >> (bit_idx % 8)) & probs_mask
bit_idx += p.bit_depth
data[n] = ((hom * ploidy) + (het * half_ploidy)) * factor
if layout == 1
# layout 1 also stores hom_alt probability, and it indicates missing by
# triple zero.
j = startidx + bit_idx ÷ 8
hom_alt = (unsafe_load_UInt64(d, j) >> (bit_idx % 8)) & probs_mask
bit_idx += p.bit_depth
if hom == 0 && het == 0 && hom_alt == 0
push!(p.missings, n)
end
end
end
return data
end
"""
    first_dosage_phased!(data, p, d, startidx, layout)
Dosage computation for phased genotypes.
"""
function first_dosage_phased!(data::Vector{<:AbstractFloat}, p::Preamble,
d::Vector{UInt8}, startidx::Integer, layout::UInt8
)
@assert length(data) == p.n_samples
@assert layout == 2 "Phased genotypes not supported for Layout 1"
ploidy = p.max_ploidy
half_ploidy = ploidy / 2
maxval = 2 ^ p.bit_depth - 1
factor = 1.0 / maxval
probs_mask = 0xFFFFFFFFFFFFFFFF >> (64 - p.bit_depth)
bit_idx = 0
for n = 1:p.n_samples
if p.max_ploidy != p.min_ploidy
            ploidy = p.ploidy[n]
half_ploidy = ploidy ÷ 2
end
first_level = 0
for _ in 1:ploidy
j = startidx + bit_idx ÷ 8
first_level += (unsafe_load_UInt64(d, j) >> (bit_idx % 8)) & probs_mask
bit_idx += p.bit_depth
end
data[n] = first_level * factor
end
return data
end
"""
second_dosage!(data, p)
Switch first allele dosage `data` to second allele dosage.
"""
function second_dosage!(data::Vector{<:AbstractFloat}, p::Preamble)
if p.n_samples >= 8
@inbounds for n in 1:8:(p.n_samples - p.n_samples % 8)
data[n] = 2.0 - data[n]
data[n + 1] = 2.0 - data[n + 1]
data[n + 2] = 2.0 - data[n + 2]
data[n + 3] = 2.0 - data[n + 3]
data[n + 4] = 2.0 - data[n + 4]
data[n + 5] = 2.0 - data[n + 5]
data[n + 6] = 2.0 - data[n + 6]
data[n + 7] = 2.0 - data[n + 7]
end
end
@inbounds for n in (p.n_samples - p.n_samples % 8 + 1):p.n_samples
data[n] = 2.0 - data[n]
end
end
"""
find_minor_allele(data, p)
Find minor allele index, returns 1 (first) or 2 (second)
"""
function find_minor_allele(data::Vector{<:AbstractFloat}, p::Preamble)
batchsize = 100
increment = max(p.n_samples ÷ batchsize, 1)
total = 0.0
freq = 0.0
cnt = 0
for idx2 in 1:increment
for n in idx2:increment:p.n_samples
cnt += 1
total += data[n]
end
freq = total / (cnt * 2)
@assert 0 <= freq <= 1
if minor_certain(freq, batchsize * idx2, 5.0)
break
end
end
if freq <= 0.5
return 1
else
return 2
end
end
@inline function get_data_size(p::Preamble, layout::Integer)
if layout == 1
return p.n_samples * p.max_probs
else
constant_ploidy = p.max_ploidy == p.min_ploidy
if p.phased == 0
nrows = p.n_samples
else
if constant_ploidy
nrows = p.n_samples * p.max_ploidy
else
nrows = sum(p.ploidy)
end
end
return p.max_probs * nrows
end
end
function _get_prob_matrix(d::Vector{T}, p::Preamble) where {T <: AbstractFloat}
reshaped = reshape(d, p.max_probs, :)
if p.phased == 1
current = 1
ragged = Matrix{T}(undef, p.max_ploidy * p.max_probs, p.n_samples)
fill!(ragged, NaN)
if p.max_ploidy == p.min_ploidy
for i in 1:p.n_samples
for j in 1:p.max_ploidy
first = (j-1) * p.max_probs + 1
last = j * p.max_probs
ragged[first:last, i] = reshaped[:, current]
current += 1
end
end
else
for (i, v) in enumerate(p.ploidy)
for j in 1:v
first = (j-1) * p.max_probs + 1
last = j * p.max_probs
ragged[first:last, i] = reshaped[:, current]
current += 1
end
end
end
return ragged
else
return reshaped
end
end
"""
probabilities!(b::Bgen, v::BgenVariant; T=Float32, clear_decompressed=false)
Given a `Bgen` struct and a `BgenVariant`, compute probabilities.
The result is stored inside `v.genotypes.probs`, which can be cleared using
`clear!(v)`.
- `T`: type for the results
- `clear_decompressed`: clears decompressed byte string after execution if set `true`
"""
function probabilities!(b::Bgen, v::BgenVariant;
T=Float32, clear_decompressed=false, data=nothing, decompressed=nothing, is_decompressed=false)
io, h = b.io, b.header
if (decompressed !== nothing && !is_decompressed) ||
(decompressed === nothing && (v.genotypes === nothing ||
v.genotypes.decompressed === nothing))
decompressed = decompress(io, v, h; decompressed=decompressed)
else
decompressed = v.genotypes.decompressed
end
if v.genotypes === nothing
p = parse_preamble(decompressed, h, v)
v.genotypes = Genotypes{T}(p, decompressed)
else
p = v.genotypes.preamble
end
startidx = 1
if h.layout == 2
startidx += 10 + h.n_samples
end
genotypes = v.genotypes
data_size = get_data_size(p, h.layout)
# skip parsing if already parsed
if genotypes.probs !== nothing && length(genotypes.probs) >= data_size
        return _get_prob_matrix(genotypes.probs, p)
end
if data !== nothing
@assert length(data) == data_size
genotypes.probs = data
else
genotypes.probs = Vector{T}(undef, data_size)
end
if h.layout == 1
parse_layout1!(genotypes.probs, p, decompressed, startidx)
elseif h.layout == 2
parse_layout2!(genotypes.probs, p, decompressed, startidx)
end
if clear_decompressed
clear_decompressed!(genotypes)
end
return _get_prob_matrix(genotypes.probs, p)
end
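# Usage sketch (illustrative only): genotype probabilities for the first variant of
# an open `Bgen` handle `b`; each column of the returned matrix corresponds to one
# sample and missing genotypes are filled with NaN.
#
#   v = first(iterator(b; from_bgen_start = true))
#   probs = probabilities!(b, v)
#   clear!(v.genotypes)   # release cached buffers when done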
function first_allele_dosage!(b::Bgen, v::BgenVariant;
T=Float32, mean_impute=false, clear_decompressed=false,
data=nothing, decompressed=nothing, is_decompressed=false)
io, h = b.io, b.header
# just return it if already computed
if v.genotypes !== nothing && v.genotypes.dose !== nothing
genotypes = v.genotypes
p = genotypes.preamble
if genotypes.dose_mean_imputed && !mean_impute
genotypes.dose[p.missings] .= NaN
elseif !genotypes.dose_mean_imputed && mean_impute
genotypes.dose[p.missings] .= mean(filter(!isnan, genotypes.dose))
end
if genotypes.minor_allele_dosage && genotypes.minor_idx != 1
second_dosage!(genotypes.dose, p)
end
genotypes.minor_allele_dosage = (genotypes.minor_idx == 1)
return v.genotypes.dose
end
if (decompressed !== nothing && !is_decompressed) ||
(decompressed === nothing && (v.genotypes === nothing ||
v.genotypes.decompressed === nothing))
decompressed = decompress(io, v, h; decompressed=decompressed)
else
decompressed = v.genotypes.decompressed
end
startidx = 1
if v.genotypes === nothing
p = parse_preamble(decompressed, h, v)
v.genotypes = Genotypes{T}(p, decompressed)
else
p = v.genotypes.preamble
end
if h.layout == 2
startidx += 10 + h.n_samples
end
@assert p.n_alleles == 2 "allele dosages are available for biallelic variants"
#@assert p.phased == 0
genotypes = v.genotypes
if data !== nothing
@assert length(data) == h.n_samples
else
genotypes.dose = Vector{T}(undef, h.n_samples)
data = genotypes.dose
end
if p.phased == 0
if p.max_ploidy == p.min_ploidy && p.max_probs == 3 && p.bit_depth == 8 &&
b.header.layout == 2
first_dosage_fast!(data, p, decompressed, startidx, h.layout)
else
first_dosage_slow!(data, p, decompressed, startidx, h.layout)
end
else # phased
first_dosage_phased!(data, p, decompressed, startidx, h.layout)
end
genotypes.minor_idx = find_minor_allele(data, p)
    data[p.missings] .= NaN
    if mean_impute
        data[p.missings] .= mean(filter(!isnan, data))
    end
    genotypes.dose_mean_imputed = mean_impute
if clear_decompressed
clear_decompressed!(genotypes)
end
genotypes.minor_allele_dosage = (genotypes.minor_idx == 1)
return data
end
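"""
    ref_allele_dosage!(b::Bgen, v::BgenVariant; T=Float32,
        mean_impute=false, clear_decompressed=false)
Compute the dosage of the reference (REF) allele for each sample: the first allele dosage
when `b.ref_first` is `true`, and the second allele dosage otherwise.
"""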
function ref_allele_dosage!(b::Bgen, v::BgenVariant;
T=Float32, mean_impute=false, clear_decompressed=false,
data=nothing, decompressed=nothing, is_decompressed=false)
data = first_allele_dosage!(b, v;
T=T, mean_impute=mean_impute, clear_decompressed=clear_decompressed,
data=data, decompressed=decompressed, is_decompressed=is_decompressed)
if !b.ref_first
second_dosage!(data, v.genotypes.preamble)
end
data
end
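"""
    alt_allele_dosage!(b::Bgen, v::BgenVariant; T=Float32,
        mean_impute=false, clear_decompressed=false)
Compute the dosage of the alternative (ALT) allele for each sample: the second allele dosage
when `b.ref_first` is `true`, and the first allele dosage otherwise.
"""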
function alt_allele_dosage!(b::Bgen, v::BgenVariant;
T=Float32, mean_impute=false, clear_decompressed=false,
data=nothing, decompressed=nothing, is_decompressed=false)
data = first_allele_dosage!(b, v;
T=T, mean_impute=mean_impute, clear_decompressed=clear_decompressed,
data=data, decompressed=decompressed, is_decompressed=is_decompressed)
if b.ref_first
second_dosage!(data, v.genotypes.preamble)
end
data
end
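"""
    alt_dosages!(arr::AbstractArray{T}, b::Bgen, v::BgenVariant;
        mean_impute=false, clear_decompressed=false)
Compute ALT allele dosages into the preallocated array `arr`, which must have
`b.header.n_samples` entries.

A minimal usage sketch (the file path `"example.bgen"` is a placeholder):
```julia
b = Bgen("example.bgen")
buf = Vector{Float64}(undef, b.header.n_samples)
for v in iterator(b)
    alt_dosages!(buf, b, v)
end
```
"""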
function alt_dosages!(arr::AbstractArray{T}, b::Bgen, v::BgenVariant;
mean_impute=false, clear_decompressed=false,
decompressed=nothing, is_decompressed=false) where T <: Real
alt_allele_dosage!(b, v; T=T, mean_impute=mean_impute, clear_decompressed=clear_decompressed, data=arr, decompressed=decompressed, is_decompressed=is_decompressed)
end
"""
    minor_allele_dosage!(b::Bgen, v::BgenVariant; T=Float32,
        mean_impute=false, clear_decompressed=false)
Given a `Bgen` struct and a `BgenVariant`, compute the minor allele dosage for each sample.
The result is stored inside `v.genotypes.dose`, which can be cleared using
`clear!(v)`.
- `T`: element type for the results
- `mean_impute`: impute missing values with the mean of the nonmissing values
- `clear_decompressed`: clears the decompressed byte string after execution if set to `true`
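
# Example
A minimal usage sketch (the file path `"example.bgen"` is a placeholder):
```julia
b = Bgen("example.bgen")
for v in iterator(b)
    dose = minor_allele_dosage!(b, v)  # vector with one dosage per sample
end
```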
"""
function minor_allele_dosage!(b::Bgen, v::BgenVariant;
T=Float32, mean_impute=false, clear_decompressed=false,
data=nothing, decompressed=nothing, is_decompressed=false)
# just return it if already computed
io, h = b.io, b.header
if v.genotypes !== nothing && v.genotypes.dose !== nothing
genotypes = v.genotypes
p = genotypes.preamble
        if genotypes.dose_mean_imputed && !mean_impute
            genotypes.dose[p.missings] .= NaN
        elseif !genotypes.dose_mean_imputed && mean_impute
            genotypes.dose[p.missings] .= mean(filter(!isnan, genotypes.dose))
        end
        genotypes.dose_mean_imputed = mean_impute
        if !genotypes.minor_allele_dosage && genotypes.minor_idx != 1
            second_dosage!(genotypes.dose, p)
        end
        genotypes.minor_allele_dosage = true
        return v.genotypes.dose
end
first_allele_dosage!(b, v; T=T, mean_impute=mean_impute,
clear_decompressed=clear_decompressed, data=data, decompressed=decompressed,
is_decompressed=is_decompressed)
genotypes = v.genotypes
if data === nothing
data = genotypes.dose
end
if genotypes.minor_idx != 1
second_dosage!(data, genotypes.preamble)
end
genotypes.minor_allele_dosage = true
return data
end
"""
    clear!(g::Genotypes)
    clear!(v::BgenVariant)
Clears the cached decompressed byte representation, probabilities, and dose.
If a `BgenVariant` is given, it removes the corresponding `.genotypes` altogether.
"""
function clear!(g::Genotypes)
g.decompressed = nothing
g.probs = nothing
g.dose = nothing
return
end
"""
    clear_decompressed!(g::Genotypes)
Clears cached decompressed byte representation.
"""
function clear_decompressed!(g::Genotypes)
g.decompressed = nothing
return
end
"""
    hardcall!(c::AbstractArray{I}, d::AbstractArray{T}; threshold=0.1) where {I <: Integer, T <: AbstractFloat}
Compute hard genotype calls from dosages. `d` is the dosage vector, and `c` is filled with the
hard-called genotypes, with values 0, 1, 2, or 9 (for missing). `threshold` determines the maximum
allowed distance between the hard call and the dosage, and must be in [0, 0.5).
"""
function hardcall!(c::AbstractArray{I}, d::AbstractArray{T}; threshold=0.1) where {I <: Integer, T <: AbstractFloat}
@assert 0 <= threshold < 0.5
for i in eachindex(d)
c[i] = d[i] < threshold ? 0 : (
1 - threshold < d[i] < 1 + threshold ? 1 : (
d[i] > 2 - threshold ? 2 : 9
))
end
c
end
"""
    hardcall(d::AbstractArray{T}; threshold=0.1) where T <: AbstractFloat
Compute hard genotype calls from dosages. `d` is the dosage vector; the returned `UInt8` vector is
filled with the hard-called genotypes, with values 0, 1, 2, or 9 (for missing). `threshold` determines
the maximum allowed distance between the hard call and the dosage, and must be in [0, 0.5).
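
A minimal sketch, assuming `dose` holds dosages on the usual 0-2 scale:
```julia
dose = Float32[0.05, 1.02, 1.97, NaN]
hardcall(dose; threshold=0.1)  # returns UInt8[0x00, 0x01, 0x02, 0x09]
```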
"""
function hardcall(d::AbstractArray{T}; threshold=0.1) where T <: AbstractFloat
c = Vector{UInt8}(undef, length(d))
hardcall!(c, d; threshold=threshold)
end